Why not build activist communities via White House petitions?

Posted: April 26th, 2012 | Author: | Filed under: Civic engagement, Open Government, Technology, Transparency | 1 Comment »

Is that all there is?

You've landed on a Whitehouse.gov petition on an issue that's close to your heart, and you're thrilled that it's finally getting some visibility. Of course you sign the petition, but then you wonder: Who are these people? What else can I do? How can I get plugged in? In way too many cases, the people who started the petition leave you to your own devices and, of course, to the Google. It's like a 3am infomercial without the 800-number. What were they thinking?

My inspiration and goad to build tools for the White House "We the People" petition site was the story of an activist who, in effect, lost a month's worth of signature gathering when she failed to reach the necessary threshold.

Along the way, I wondered how often petition initiators added links to their petition text, to provide more information to potential signers or to supplement their work on whitehouse.gov by building a community on a site that gave them more control.

“We The People”-scope

To investigate, I’ve built a live, interactive database of petitions currently visible and open for signatures at the White House. How well are petitioneers using WhiteHouse.gov traffic and visibility to build activist communities? The results aren’t pretty.
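
(For the technically curious: the link check itself is simple. Here's a minimal sketch in JavaScript; the sample petition objects are hypothetical stand-ins for whatever the scraper actually collects, and the regex just looks for plain-text URLs in the petition body.)

    // Petitions on whitehouse.gov render links as plain text, so a regex
    // over the petition body is enough to spot them.
    const URL_PATTERN = /\b(?:https?:\/\/|www\.)[^\s)]+/gi;

    function findLinks(petitionBody) {
      return petitionBody.match(URL_PATTERN) || [];
    }

    // Hypothetical examples, shaped roughly like the scraper's output
    const petitions = [
      { title: "Fund anti-viral research", body: "Details: http://example.org/press-release" },
      { title: "Increase NASA funding", body: "We need a bigger budget for exploration." }
    ];

    const withLinks = petitions.filter((p) => findLinks(p.body).length > 0);
    console.log(withLinks.length + " of " + petitions.length + " petitions include links");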

90% of the time, you’re on your own

Of the 39 petitions open for signatures this morning, only four include links:

  1. a request for funding of an MIT anti-viral drug links to a press release providing further information
  2. a call for legislation implementing various economic and legal reforms (NESARA) links to an activist website and to a religious/New Age Ning community
  3. a call for increased funding for NASA includes a reference to a website for that issue campaign, and
  4. a request that the Administration veto any legislation that extends tax cuts for the highest earners includes a reference to MoveOn.org.

None of the links (actually in plain text, since the petition site doesn't allow hot links) make it easy to plug into a community. The NASA funding campaign website is focused and includes further calls to action, but does not provide a community forum or a mailing list sign-up. The NESARA-related websites provide a wealth of information and, via Ning, a community; however, I could not see how I might easily connect with other supporters of the linked petition. MoveOn.org is a major activist community, but nothing on its home page references the current tax-related petition.

So, of 39 petitions, only three provide links that would allow a signer to tap into a larger community, discuss the petition, and monitor progress, and even those three links are muddy.

The opportunity

What if, instead, a petition linked to a well-designed landing page that encouraged people to sign up to track the progress of the petition, support the cause via other actions, and connect with fellow activists? It's a missed opportunity.

And there’s more!

(The petition overview can be filtered and sorted in many different ways. For instance, you can highlight the backlog of petitions that have met their signature goal but don’t yet have an official Administration response, or focus just on the petitions for civil liberties, human rights, or immigration issues – almost half of the total currently open.)


On contraception mandate, Nebraska a big NO, DC a bigger YES, in White House petitions

Posted: March 1st, 2012 | Author: | Filed under: Civic engagement, Open Government, Technology, Transparency | 4 Comments »

Last week, I used a trial run of a new "petition scraping" plugin I've developed to see which states most strongly supported the recent White House petition that asked the Administration to rescind the health care reform contraception mandate for Catholic employers.

Today, I can add a second, opposing petition to the analysis. It urged the Administration to "stand strong" on the no-cost birth control requirement. From what I've seen on the petition site, this is unusual: some petitions garner few signatures, but very few petitions are arranged in pro/con pairs. We can take advantage of this "natural experiment" to compare state responses on either side of the issue. (Signature data for the "Stand strong" petition can be downloaded via the csv link below.)

Nebraska, North Dakota, Kansas Against; DC Engaged

The outliers, highlighted in red in the chart, are the story.

Kansas, North Dakota, and Nebraska – in the bottom right – showed significantly stronger support for the Rescind petition, at 309, 367, and 487 signatures per million, than the national average of 117. In contrast, their support for Stand Strong was fairly close to the national average of 89 per million – they’re well within the “cluster” on the left axis.

Something even more interesting is going on in DC. At 509 signatures per million, it is the standout supporter of Stand Strong. But notice that, at 229 signatures per million, it looks a lot like Kansas, North Dakota, and Nebraska in per capita support for Rescind. (I'd guess that DC's intensity reflects the pro-contraception response of long-term residents combined with the response from advocacy groups on both sides.)

This table provides the details for the chart above:

Download as csv file (4k).

The animated map shows how signatures flowed from each state, normalized by its population, with the petition “closing” on February 10. Click on the slider to see how each state contributed signatures starting on February 3.

Petition similarities

In many respects, signers responded similarly to both petitions:

                                                      Stand Strong   Rescind
Signatures                                            22,945         29,127
Days to reach 500 signatures (visibility threshold)   4              3
Days to reach 50 states and DC                        4              4
Percent of signers not providing a place              17%            17%

 
Download the “Stand strong” signature details as a csv file (660k).

See the previous post in the series for details on the Rescind petition and more information on the mechanics of petitions and signatures at whitehouse.gov.

A note on statistics

Some of the variation in a particular state's response will be due to chance rather than to a fundamental difference between that state's political leanings and the country's. For instance, weather patterns or a statewide preoccupation with a sports event might have reduced Mississippi's engagement; it might generate more signatures per capita on similar petitions at another time.

Statisticians measure how far an indicator departs from the average in standard deviations; the standard deviation captures the variability of a set of numbers. In the case of the petition signatures, if a state's response differed from the US average by less than two standard deviations, as Mississippi's did, that difference could occur by chance more than 5% of the time.

The bottom left quadrant contains all the states within two standard deviations of the US average.

The remaining three states and DC are outliers indeed. On the Rescind petition, Kansas’s response is more than two, North Dakota’s almost three, and Nebraska’s more than four standard deviations above the US average. This would occur by chance less than 5%, 0.3%, and 0.007% of the time — i.e., from rarely to never. DC’s response to Stand Strong is 5.7 standard deviations away from the mean, which would occur 0.00001% of the time by chance.
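
(For readers who want to redo the arithmetic, here is a minimal JavaScript sketch of the outlier test. It takes the baseline to be the mean of the state-level per-million rates; the calculation behind the chart may differ in detail, for instance in exactly how the US average and the standard deviation are computed.)

    // rates: signatures per million residents, keyed by state,
    // e.g. { NE: 487, ND: 367, KS: 309, MS: 31, ... }
    const mean = (xs) => xs.reduce((a, b) => a + b, 0) / xs.length;

    const standardDeviation = (xs) => {
      const m = mean(xs);
      return Math.sqrt(xs.reduce((sum, x) => sum + (x - m) * (x - m), 0) / xs.length);
    };

    // States whose rate sits more than `threshold` standard deviations from the mean
    function outliers(rates, threshold) {
      const states = Object.keys(rates);
      const values = states.map((s) => rates[s]);
      const m = mean(values);
      const sd = standardDeviation(values);
      return states
        .map((s) => ({ state: s, rate: rates[s], z: (rates[s] - m) / sd }))
        .filter((row) => Math.abs(row.z) > threshold);
    }

    // e.g. outliers(rates, 2) lists every state more than two standard deviations from the mean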

The outliers, in other words, are radically more engaged in these respective petitions than the rest of the country.


Liberating signatures from White House petitions – a new tool for activists

Posted: February 23rd, 2012 | Author: | Filed under: Civic engagement, Open Government, Technology, Transparency | 2 Comments »

Open government often carries a significant risk to activists: organizing efforts may be “locked up” by Federal agencies, even with the best intentions.

The rewards, and risks, of squirreling away

Consider Terra Ziporyn Snider. Last fall, she initiated a petition on the White House website requesting changes in school start times. The then-current White House rules required that she gather 5,000 signatures within 30 days in order to keep the petition on the site. She and her supporters encountered various technical problems – intermittent site outages, difficulty signing up new users – and as the deadline approached, with 1,575 signatures recorded, Dr. Snider realized that the fruit of her efforts was about to be digitally vanished, per White House rules.

I was struck by the lengths to which she went to transplant the list of names she had built at the White House to MoveOn. Though these signatures had been gathered through her online organizing efforts, and of course were stored on a hard drive somewhere in the whitehouse.gov domain, she had no way to access the data easily. Instead, she printed out the petition and, apparently, retyped the signatures by hand.

Imagine a New England squirrel storing away nuts for next winter in the trunk of a tourist’s car – it’s secure today, tomorrow, and perhaps even next week, but when the snow comes, the car — and the nuts — are in Florida, and the squirrel starves.

Of course, in the case of White House petitions, the potential of reaching a wider audience for your cause and of getting the attention of the Obama administration may make this risk worthwhile.

But, as Dr. Snider found, sometimes the car drives away, and you've got to scramble not to be left empty-handed. How could one reduce the risk, particularly for activists who don't know about or don't have access to sites like MoveOn?

A tool for liberating signatures

Her story was my inspiration to build a page scraping toolset that would allow activists to get the benefits of White House web petitions, while reducing the risk.

The toolset for liberating signatures from White House petitions is almost complete. It’s a browser-based javascript plugin that automates signature capture with some shortcuts to reduce loads on the petition server.
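
(For a rough idea of what "automates signature capture" means in practice, here is a minimal sketch, not the plugin itself. The paging parameter and the two-second pause are assumptions on my part; the point is simply to walk the signature pages one at a time without hammering the petition server.)

    const PAGE_DELAY_MS = 2000; // pause between requests to keep the load low

    const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

    async function captureSignaturePages(petitionUrl, pageCount) {
      const pages = [];
      for (let page = 1; page <= pageCount; page++) {
        // "?page=" is a hypothetical stand-in for however the site paginates signatures
        const response = await fetch(petitionUrl + "?page=" + page);
        pages.push(await response.text()); // raw HTML, parsed in a later step
        await sleep(PAGE_DELAY_MS);
      }
      return pages;
    }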

As a proof of concept, I set the plugin to work on two recent petitions. The first requests that the Administration rescind regulations mandating that religious institutions provide contraceptive coverage under their employee health insurance plans, even if this is contrary to their religious precepts. The second urges the Administration to “stand strong” in maintaining this mandate.

In this post, I'll focus on the signature data from the "Rescind" petition. A follow-up post will provide details on the "Stand strong" petition signatures. [Update 3/2/2012: I've posted the "Stand strong" results and a state-by-state comparison.]

If you’re familiar with the mechanics of the White House petition site, skip to the next section for the results.

The mechanics of initiating and signing petitions

Anyone may initiate a petition. Per the rules currently in force, new petitions are visible only to those web visitors who already know the specific petition URL. Once a petition receives its 150th signature, it is listed by the White House in the index of current petitions and can be found by appropriate search terms.

The initiator and all other petition supporters then have 30 days to gather at least 25,000 signatures in total. If they fail to reach that threshold within 30 days, the petition disappears, as Dr. Snider experienced. If 25,000 people "sign" the petition, the White House promises to post a response.

A WhiteHouse.gov login is required to sign a petition. To get a login, you provide your email address, your name, and, optionally, city, state and zipcode. Each petition signature block includes the first name and last initial, the signature’s order in the overall total, the date the signature was provided, and the city and state of the signer, if provided. For instance:

Chris B
Washington, DC
February 22, 2012
Signature # 1,079
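
(Parsing a block like this into a record is straightforward. A minimal sketch follows; the assumption that signers who gave no place produce a three-line block is mine, not anything the site documents.)

    // Turn one signature block (an array of its lines) into a record.
    function parseSignatureBlock(lines) {
      const hasPlace = lines.length === 4;
      const [name, place, date, seq] = hasPlace
        ? lines
        : [lines[0], null, lines[1], lines[2]];
      const [city, state] = place ? place.split(",").map((s) => s.trim()) : [null, null];
      return {
        name: name,
        city: city,
        state: state,
        date: date,
        sequence: Number(seq.replace(/[^\d]/g, "")) // "Signature # 1,079" -> 1079
      };
    }

    parseSignatureBlock(["Chris B", "Washington, DC", "February 22, 2012", "Signature # 1,079"]);
    // -> { name: "Chris B", city: "Washington", state: "DC",
    //      date: "February 22, 2012", sequence: 1079 }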

 

Example: Signature data from the “Rescind” petition

From January 28 through February 10, the Rescind petition gathered 29,127 signatures. The inset below displays the raw data gathered by the page scraping toolset, which can also be downloaded by the link at the bottom of the table.

This animated map shows how the signatures flowed in from January 28 to February 10, normalized by the population of each state.

The shading of each state shows the number of signatures provided by that state, to date, per million residents, based on 2010 Census figures. States that "punched above their weight" are shown in darker green; states that were underrepresented, in very light green. Since the figures are cumulative, each state grows somewhat darker over time. Click on the slider at the bottom of the map to show how the signatures accumulated over time.
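
(The normalization is simple arithmetic. A minimal sketch, assuming the signature records are sorted by date and that state populations come from a 2010 Census lookup table; the real map is driven by the scraped data shown below.)

    // Cumulative signatures per million residents, per state, by day.
    // signatures: [{ state: "NE", date: "2012-02-03" }, ...], sorted by date
    // populationByState: 2010 Census counts keyed by state abbreviation
    function perMillionByDay(signatures, populationByState) {
      const runningCounts = {};
      const frames = {}; // date -> { state: signatures per million, ... }
      for (const sig of signatures) {
        if (!sig.state) continue; // signers with no location are skipped
        runningCounts[sig.state] = (runningCounts[sig.state] || 0) + 1;
        const snapshot = {};
        for (const state of Object.keys(runningCounts)) {
          snapshot[state] = (runningCounts[state] / populationByState[state]) * 1e6;
        }
        frames[sig.date] = snapshot; // the last snapshot of each day wins
      }
      return frames;
    }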

Signature data for the Rescind petition

Download as csv file (1.1 mb).

More detailed analysis shows

  • that the petition took off pretty quickly – two signatures on day one, 32 on day two, and 1,386 on day three (by which point it had reached all fifty states and DC)
  • that state participation varied dramatically: Nebraska provided more than 480 signatures per million residents and North Dakota more than 360, while Mississippi provided just 31 signatures per million inhabitants. (Overall, across the US, 127 signatures were provided per million residents, if we assume that all signatures came from the United States.)
  • that 4,951 signatures were provided by people who did not state their location – these show up as NULL in the data.

Next post: How does the signature flow for the “stand strong” petition compare? What surprises do we find when we compare states’ activity on these two opposing petitions? Are these signers as reluctant to provide place information as the Rescind signers?

A postscript for data geeks

Some curiosities:

  1. Each signature is numbered. The same sequence number may appear multiple times on the signatures page, indicating, presumably, two or more different people who signed almost simultaneously. For instance, if the third, fourth, and fifth signers all acted simultaneously, the first six signatures would be numbered 1, 2, 3, 3, 3, 6.
  2. One Rescind signer was particularly eager: he signed twice in quick succession, so his name and place show up twice in a row with the same sequence number. The software doesn't currently catch this anomaly (a sketch of one way to do so follows this list), leading to a database count of 29,126 signatures, one less than shown on the White House site.
  3. In addition to the “no location” signatures, 28 people provided place information for military post offices or mistyped their place information.
  4. The scraping tool captures the first name and initial of the signer as shown on the petition, but I’ve omitted this column from the download for the moment.
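
(Item 2 is fixable in principle. Here's a sketch of one way to catch the double signature: drop a row only when it repeats both the sequence number and the signer details of the row immediately before it, so that legitimate ties that merely share a sequence number survive.)

    // rows: parsed signature records, in page order
    function dropConsecutiveDuplicates(rows) {
      return rows.filter((row, i) => {
        if (i === 0) return true;
        const prev = rows[i - 1];
        const sameSequence = row.sequence === prev.sequence;
        const sameSigner =
          row.name === prev.name && row.city === prev.city && row.state === prev.state;
        return !(sameSequence && sameSigner);
      });
    }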


Social media to improve public decision-making? Yes and no (and yes)

Posted: October 6th, 2010 | Author: | Filed under: crowdsourcing, Open Government | No Comments »

In his estimable blog, Tim Bonnemann asks “Can Social Media Be Utilized to Involve the Public in Making Better Decisions?”

My first four, increasingly accurate approximations to the right answer:
1. Yes.

2. No, if the underlying question is “Can I get the ‘public participation’ checkbox checked off by turning on a Twitter account?” (Not Tim’s underlying question, of course, but some people will read it this way.)

There is a lot of approach/avoidance ambivalence about social media: people see that one can set up an account or a fan page in an afternoon, but they also glimpse, more dimly, that a lot of work is required to make that useful. (Excel will help you estimate a project budget, but it won’t make the process fun or easy.)

3. No, if the real question is “Can I use social media for public involvement and still stay comfortably in control?” See Obama asking for questions via Google Moderator, only to be forced to discuss marijuana legalization. See Digg and the DVD encryption hack issue.

(Of course, social media processes can be shaped.)

4. Yes, once you understand that social media technology is a small part of the overall effort and you’ve rethought how much or little control you need over the whole process.

Bonus points for recognizing that the technology of social media is only the fourth most important concern, after you’ve identified the specific Public you want to involve, your Objectives in involving them, and your Strategy for doing so.


Describe now, define later – a better way to understand Life 2.0

Posted: September 18th, 2010 | Author: | Filed under: Civic engagement, crowdsourcing, Open Government | 1 Comment »

When I venture into new arenas – social media, crowdsourcing, online engagement –  I’m fed by discussions with people who are doing this work. But, too often, the conversation gets bogged down by arguments about definitions.

For instance, in a chat about crowdsourcing recently, someone offered online commenting on government regulations as an example, then someone else – it might have been me – almost derailed the conversation by asking whether commenting really counted as “crowdsourcing”.

The problem: the person who’s just run a rule-making process that received thousands of comments knows what she did, how participants responded, what seemed to work, what fell short. But she – and all the rest of us – are clueless about whether this really is crowdsourcing.

In general: when we’re describing an experience or process we’ve experienced, we know what we’re talking about. But when we debate whether this experience is an example of crowdsourcing, we don’t.

Bold claim on my part, I know. And, one day, we’ll be able to agree, quickly, on whether my friend’s process was crowdsourcing, or collective intelligence, or prediction markets, crowd-storming, or peer production or something else entirely.

Why? We’ll, collectively, have more experience, and we’ll have come to (some) agreement on who the authorities are, and there’ll be some benefit to the definitions.

But today we’re still groping, learning what’s been done, identifying new combinations that haven’t yet been tried but look promising.

In effect, we're crowdsourcing the definition of "crowdsourcing".

(And if you're thinking that this advice applies to discussions about "social media", "Gov 2.0", and "online engagement" as well, you've got my point exactly.)


Invest failure wisely to generate insights for better government

Posted: August 10th, 2010 | Author: | Filed under: Open Government | 1 Comment »

In brief: Government is continually confronted by challenges that can only be met by discovery, since existing knowledge is inadequate. Discovery relies, in part, on experimentation. Fruitful experimentation relies on a tolerance for significant failures.

Last week, Lovisa Williams wrote a well-received post, “Failure is not an option”, arguing that

Most civil servants … recognize we are in positions of public trust…. Therefore, we have developed a culture where failure is not considered an option. If we fail, then there could be serious consequences….

In order for Government to successfully evolve to the next generation of government, … we need to ensure we have established a means where we can continue to feed the evolution….

We do have things that don’t work as expected and absolutely fail, but we don’t talk about these things even within our own agencies. We are also missing the potential for us to start exploring other paths or opportunities earlier.

She’s onto something very important.

Government action takes place in many different contexts. When the problems are familiar, we have well-known solutions that produce predictable and satisfactory results, and the consequences of failure are high, it makes sense to go for predictable results and sanction failure.

But today we face challenges that are unprecedented, even baffling, some consequential, even earth-shattering – such as global warming, deep sea oil spills, deflation, Al Qaeda and other deadly yet ghostly foes – some merely puzzling – such as social media, generational changes, and new political constellations. And we don’t yet have satisfactory solutions.

So, failure, to some degree, is inevitable. I’d argue, with Lovisa, against the massive, glacially slow failures that, if they teach us at all, provide too little new information, too late for us to change course.

Instead, I'd argue for well-designed experiments, where we recognize what we don't know and invest effort (and, to some degree, failure) to generate insights that arrive early enough to make a difference. Instead of "spending" failure slowly, covertly, and massively, let's invest it openly, quickly, carefully, and in small amounts, to create new methods, opportunities, and success.

Notes:

  • Eric Ries's podcast on Lean Startups: Doing More with Less shows these ideas at work in business and entrepreneurship.
  • @digiphile's tweet mashing up comments on Lovisa's post with @marcidale's notion of #AgileGov was helpful in shifting my thinking in this direction.
  • Then, Peter Norvig's observation that "If you do experiments and you're always right, then you aren't getting enough information out of those experiments" put it all together. (Thanks to lesswrong.com for the pointer.)


Brownie points, or results?

Posted: May 20th, 2010 | Author: | Filed under: Metrics, Open Government | Tags: | 1 Comment »

Using the Gulf oil spill to get clear about measuring Open Government

Measure “Open Government”? Yes, but …

I think that the success of the Obama Administration's Open Government effort is critical, but I'm put off, even bored, by the measurement discussions to date. Imagine that you've got good reason to believe that your nephew is the next Jackson Pollock and your niece the next Maya Lin, and then the first report back from their studios is: "He's painted more than 500 square feet of canvas! She's created two and a half tons of sculpture!" and you'll know how I feel.

It’s as if someone brought a speedometer to a sunset.

In December, Beth Noveck, the leader of the Administration’s Open Government efforts, wrote that measures of Open Government would be useful as a way to hold the Administration’s “feet to the fire” and to ensure that proposed changes were implemented. She suggested a variety of measures, including:

  • The year-to-year percentage change in information published online in open, machine-readable formats;
  • The number of FOIA requests processed and the percentage change in the backlog;
  • The creation of "data platforms" for sharing data across government;
  • The successful posting of data that increases accountability and responsiveness, public knowledge of agency operations, mission effectiveness, or economic opportunity.

(I’ve left the link in, but alas the page has disappeared.)

To be fair, it’s a tough problem. As the “Measuring Success” group from one of the Open Government Directive workshops noted, seemingly reasonable measures may be difficult to interpret, for instance: the time spent on a website might signal popular use … or that users were confused.

So let's start again, this time from the bottom up: if you were managing an Open Government effort, what would you want to measure? For instance…

Virtual USA

In February 2009, Homeland Security rolled out Virtual USA (vUSA) for sharing geospatial data between emergency response agencies, "a national system of systems … so that disparate systems can communicate with each other". It will allow responders in multiple locations to coordinate their efforts with a common set of images and thereby reduce confusion and shift at least some activity away from phone calls. It is a bottom-up collaboration between DHS, first responder groups, and eight southeastern states. The system depends in part on states and localities to provide data, and is locally controlled: the agency providing the data owns it, controls how, when, and with whom it is shared, and can use its existing software to do so.

vUSA seems impressive: two more pilots are starting, covering eleven more states, and the user community at FirstResponder.gov has about 150 members.

The nearest level of management

Continuing with our exercise, imagine that you're in charge of vUSA. You face decisions about which additional GIS standards and technologies to incorporate, how to divide resources between technology and additional outreach or training for participating states, and whether to reach out to additional Federal agencies – for instance, the Minerals Management Service, which had primary regulatory authority over the BP oil well.

To guide these decisions, you’d ask your staff these quantitative questions:
  • How much staff time in participating states has shifted from coordination via telephone to coordination via vUSA?
  • For what issues and data needs are participating states still using the phone?

and these qualitative ones:

  • What would have changed in the oil spill response if vUSA didn’t exist?
  • How does adoption and involvement differ between various agencies in the participating states and the various components of each agency?
  • Are response sites still using fax, couriers, or other workarounds to share information?

Big picture managers

Now zoom out a bit: imagine that you’re a senior manager at the Department of Homeland Security (DHS), with ultimate responsibility for vUSA but also many other programs.

Given your agency’s recent history with Katrina on the Gulf Coast, among other things, you’ll monitor how smoothly local, state, regional, and federal actors work together in dealing with emergencies and wonder whether staff increases (e.g. for liaison officers), training, or incentives would be more likely than technology (such as vUSA) to improve coordination. And you’d consider whether coordination should be addressed more broadly than geospatial information sharing, for instance to include the development of shared goals among the coordinating agencies or agreement on division of roles and responsibilities.

You’d ask the questions we’ve already considered, but you’ve got a broader range of responsibilities. The vUSA manager’s career will live or die by the success of that effort, but you’re worried about DHS’s success in general. Maybe there are better ideas and more worthwhile efforts than vUSA.

To assess this, you’d ask your staff to research these issues:

  • How eager are other states to join the vUSA effort? (The two additional pilots would be a good sign.)
  • How has vUSA affected the formulation of shared goals for the oil spill clean-up effort?
  • Is each agency involved playing the role that it is best suited for in the clean-up?
  • How has the emergency response to the flooding in Tennessee, a vUSA participant, differed from the response to flooding earlier this year in Minnesota and North Dakota, states that don't participate in vUSA?

The last question is an example of a “natural experiment”, a situation arising out of current events that allows you to compare crisis management and response assisted by vUSA vs. crisis management and response handled without vUSA, almost as well as you could with a controlled experiment.

You’d also have some quantitative questions for your staff, for instance: how have the FEMA regions participating in vUSA performed on FEMA’s overall FY 2009 Baseline Metrics from the agency’s Strategic Plan?

And back to “measuring open government”

Note how much more compelling these “close to the ground” measures are than the generic “Open Government” metrics. If you were told, this morning, that a seemingly minor vUSA glitch had forced the oil spill command center to put in extra phone lines, no one would have to interpret that measure for you: you’d already know what you’re going to focus on today. And if, as a senior manager, you had a report in front of you demonstrating that none of the dozen hiccups in the response to North Dakota’s flooding were repeated in the vUSA-assisted response to the Tennessee disaster, you might actually look forward to a Congressional hearing.

Two of the Open Government measures are relevant:

  1. vUSA is a new platform for sharing data across government.
  2. It’s certainly intended to increase DHS’s responsiveness and its effectiveness in carrying out its mission, though it appears that only some vUSA data are publicly available.

But these considerations would hardly be useful to the line manager, and they’d be useful to the agency’s senior managers mostly as checkboxes or brownie points when big Kahunas from OMB or the White House came to call.

Conclusions

Of course, if we had picked other Open Government efforts, we would have identified different measures, but there are some general lessons for the problem of developing Open Government metrics.

Get your hands dirty

Reviewing an actual program, rather than “Open Government” in the abstract, makes it easier to get a handle on what we might measure.

Decision requirements drive measurement needs

The line manager, about to decide whether to reach out first to EPA or MMS in expanding vUSA's Federal footprint, will be eager to know how heavily back channels have been used to bring these two agencies into the oil spill cleanup. The GIS guru will want to know whether there's something about mapping ocean currents that can't be handled by vUSA's existing standards.

Different decision-makers require different metrics

In contrast, the DHS senior manager had better not get lost in the weeds of GIS interoperability, but ought to be ever alert for signs that the whole vUSA effort misses the point.

In other words, when someone asks “what’s the right way to measure the success of this open government effort?”, the appropriate answer is “who wants to know?”.

Seek out natural experiments

Even with great measures, Open Government champions will always be confronted by the challenge of demonstrating that their project truly caused the successful result. A “natural experiment”, if you can find one, will go a long way towards addressing that challenge.