Describe now, define later – a better way to understand Life 2.0

Posted: September 18th, 2010 | Author: | Filed under: Civic engagement, crowdsourcing, Open Government | 1 Comment »

When I venture into new arenas – social media, crowdsourcing, online engagement – I’m fed by discussions with people who are doing this work. But, too often, the conversation gets bogged down by arguments about definitions.

For instance, in a chat about crowdsourcing recently, someone offered online commenting on government regulations as an example, then someone else – it might have been me – almost derailed the conversation by asking whether commenting really counted as “crowdsourcing”.

The problem: the person who’s just run a rule-making process that received thousands of comments knows what she did, how participants responded, what seemed to work, and what fell short. But she – like all the rest of us – is clueless about whether this really is crowdsourcing.

In general: when we’re describing a process we’ve been through ourselves, we know what we’re talking about. But when we debate whether that experience is an example of crowdsourcing, we don’t.

Bold claim on my part, I know. And, one day, we’ll be able to agree, quickly, on whether my friend’s process was crowdsourcing, collective intelligence, a prediction market, crowd-storming, peer production, or something else entirely.

Why? By then we’ll collectively have more experience, we’ll have come to (some) agreement on who the authorities are, and there will be some real benefit to having the definitions.

But today we’re still groping, learning what’s been done, identifying new combinations that haven’t yet been tried but look promising.

In effect, we’re crowdsourcing the definition of “crowdsourcing”.

(And if you’re thinking that this advice applies to discussions about “social media”, “Gov 2.0”, and “online engagement” as well, you’ve got my point exactly.)


Invest failure wisely to generate insights for better government

Posted: August 10th, 2010 | Author: | Filed under: Open Government | 1 Comment »

In brief: Government is continually confronted by challenges that can be met only by discovery, since existing knowledge is inadequate. Discovery relies, in part, on experimentation. And fruitful experimentation relies on a tolerance for significant failures.

Last week, Lovisa Williams wrote a well-received post, “Failure is not an option”, arguing that

Most civil servants … recognize we are in positions of public trust…. Therefore, we have developed a culture where failure is not considered an option. If we fail, then there could be serious consequences….

In order for Government to successfully evolve to the next generation of government, … we need to ensure we have established a means where we can continue to feed the evolution….

We do have things that don’t work as expected and absolutely fail, but we don’t talk about these things even within our own agencies. We are also missing the potential for us to start exploring other paths or opportunities earlier.

She’s onto something very important.

Government action takes place in many different contexts. When the problems are familiar, we have well-known solutions that produce predictable and satisfactory results, and the consequences of failure are high, it makes sense to go for predictable results and to penalize failure.

But today we face challenges that are unprecedented, even baffling: some consequential, even earth-shattering – global warming, deep sea oil spills, deflation, Al Qaeda and other deadly yet ghostly foes – and some merely puzzling – social media, generational changes, and new political constellations. And we don’t yet have satisfactory solutions.

So, failure, to some degree, is inevitable. I’d argue, with Lovisa, against the massive, glacially slow failures that, if they teach us at all, provide too little new information, too late for us to change course.

Instead, I’d argue for well-designed experiments, where we recognize what we don’t know and invest effort (and, to some degree, failure) to generate insights that arrive early enough to make a difference. Instead of “spending” failure slowly, covertly, and massively, let’s invest it openly, quickly, carefully, and in small amounts, to create new methods, opportunities, and success.

Notes:

  • Eric Ries’s podcast on Lean Startups: Doing More with Less shows these ideas at work in business and entrepreneurship.
  • @digiphile’s tweet mashing up comments on Lovisa’s post with @marcidale’s notion of #AgileGov was helpful in shifting my thinking in this direction.
  • Then, Peter Norvig’s observation that “If you do experiments and you’re always right, then you aren’t getting enough information out of those experiments” put it all together. (Thanks to lesswrong.com for the pointer.)


Brownie points, or results?

Posted: May 20th, 2010 | Author: | Filed under: Metrics, Open Government | 1 Comment »

Using the Gulf oil spill to get clear about measuring Open Government

Measure “Open Government”? Yes, but …

I think that the success of the Obama Administration’s Open Government effort is critical, but I’m put off, even bored, by the measurement discussions to date. Imagine that you’ve got good reason to believe that your nephew is the next Jackson Pollock and your niece the next Maya Lin, and then the first report back from their studios is: “He’s painted more than 500 square feet of canvas! She’s created two and a half tons of sculpture!” – and you’ll know how I feel.

It’s as if someone brought a speedometer to a sunset.

In December, Beth Noveck, the leader of the Administration’s Open Government efforts, wrote that measures of Open Government would be useful as a way to hold the Administration’s “feet to the fire” and to ensure that proposed changes were implemented. She suggested a variety of measures, including the following (a sketch of the arithmetic for the first two appears below):

  • The year-to-year percentage change in information published online in open, machine-readable formats;
  • The number of FOIA requests processed, and the percentage change in the backlog;
  • The creation of “data platforms” for sharing data across government;
  • The successful posting of data that increases accountability and responsiveness, public knowledge of agency operations, mission effectiveness, or economic opportunity.

(I’ve left the link in, but alas the page has disappeared.)
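
For the quantitative items on that list, the arithmetic itself is trivial; the hard part is producing consistent counts year over year. As a minimal sketch – using purely hypothetical figures and field names – the first two measures might be computed like this:

```python
# Hypothetical year-end counts for one agency (illustrative numbers only).
datasets_published = {2008: 120, 2009: 168}  # machine-readable datasets online

foia = {
    2008: {"processed": 4200, "backlog": 900},
    2009: {"processed": 4700, "backlog": 720},
}

def pct_change(old, new):
    """Percentage change from old to new."""
    return 100.0 * (new - old) / old

# Measure 1: year-to-year change in machine-readable information online.
print(f"Datasets online: {pct_change(datasets_published[2008], datasets_published[2009]):+.1f}%")

# Measure 2: FOIA requests processed, and percentage change in the backlog.
print(f"FOIA requests processed in 2009: {foia[2009]['processed']}")
print(f"FOIA backlog change: {pct_change(foia[2008]['backlog'], foia[2009]['backlog']):+.1f}%")
```

Every number here presupposes a definition (what counts as “machine-readable”? when is a request “processed”?), which is exactly where generic measures get slippery.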

To be fair, it’s a tough problem. As the “Measuring Success” group from one of the Open Government Directive workshops noted, seemingly reasonable measures may be difficult to interpret, for instance: the time spent on a website might signal popular use … or that users were confused.

So let’s start again, this time from the bottom up: if you were managing an Open Government effort, what would you want to measure? For instance…

Virtual USA

In February 2009, Homeland Security rolled out Virtual USA (vUSA) for the sharing of geospatial data between emergency response agencies – “a national system of systems … so that disparate systems can communicate with each other”. It will allow responders in multiple locations to coordinate their efforts with a common set of images and thereby reduce confusion and shift at least some activity away from phone calls. It is a bottom-up collaboration between DHS, first responder groups, and eight southeastern states. The system depends in part on states and localities to provide data, and is locally controlled: the agency providing the data owns it, controls how and when and with whom it is shared, and can use its existing software to do so.

vUSA seems impressive:

Two more pilots are starting, covering eleven more states. And the user community at FirstResponder.gov has about 150 members.

The nearest level of management

Continuing with our exercise, imagine that you’re in charge of vUSA. You face decisions about which additional GIS standards and technologies to incorporate, how to divide resources between technology vs. additional outreach or training for participating states, and whether to reach out to additional Federal agencies, for instance, the Minerals Management Service, which had primary regulatory authority over the BP oil well.

To guide these decisions, you’d ask your staff these quantitative questions:

  • How much staff time in participating states has shifted from coordination via telephone to coordination via vUSA?
  • For what issues and data needs are participating states still using the phone?

and these qualitative ones:

  • What would have changed in the oil spill response if vUSA hadn’t existed?
  • How does adoption and involvement differ between various agencies in the participating states and the various components of each agency?
  • Are response sites still using fax, couriers, or other workarounds to share information?

Big picture managers

Now zoom out a bit: imagine that you’re a senior manager at the Department of Homeland Security (DHS), with ultimate responsibility for vUSA but also many other programs.

Given your agency’s recent history with Katrina on the Gulf Coast, among other things, you’ll monitor how smoothly local, state, regional, and federal actors work together in dealing with emergencies and wonder whether staff increases (e.g. for liaison officers), training, or incentives would be more likely than technology (such as vUSA) to improve coordination. And you’d consider whether coordination should be addressed more broadly than geospatial information sharing, for instance to include the development of shared goals among the coordinating agencies or agreement on division of roles and responsibilities.

You’d ask the questions we’ve already considered, but you’ve got a broader range of responsibilities. The vUSA manager’s career will live or die by the success of that effort, but you’re worried about DHS’s success in general. Maybe there are better ideas and more worthwhile efforts than vUSA.

To assess this, you’d ask your staff to research these issues:

  • How eager are other states to join the vUSA effort? (The two additional pilots would be a good sign.)
  • How has vUSA affected the formulation of shared goals for the oil spill clean-up effort?
  • Is each agency involved playing the role that it is best suited for in the clean-up?
  • How has the emergency response to the flooding in Tennessee, a participant in vUSA, differed from the response to flooding earlier this year in Minnesota and North Dakota, states that don’t participate in vUSA?

The last question is an example of a “natural experiment”, a situation arising out of current events that allows you to compare crisis management and response assisted by vUSA vs. crisis management and response handled without vUSA, almost as well as you could with a controlled experiment.

You’d also have some quantitative questions for your staff, for instance: how have the FEMA regions participating in vUSA performed on FEMA’s overall FY 2009 Baseline Metrics from the agency’s Strategic Plan?

And back to “measuring open government”

Note how much more compelling these “close to the ground” measures are than the generic “Open Government” metrics. If you were told, this morning, that a seemingly minor vUSA glitch had forced the oil spill command center to put in extra phone lines, no one would have to interpret that measure for you: you’d already know what you’re going to focus on today. And if, as a senior manager, you had a report in front of you demonstrating that none of the dozen hiccups in the response to North Dakota’s flooding were repeated in the vUSA-assisted response to the Tennessee disaster, you might actually look forward to a Congressional hearing.

Two of the Open Government measures are relevant:

  1. vUSA is a new platform for sharing data across government.
  2. It’s certainly intended to increase DHS’s responsiveness and its effectiveness in carrying out its mission, though it appears that only some vUSA data are publicly available.

But these considerations would hardly be useful to the line manager, and they’d be useful to the agency’s senior managers mostly as checkboxes or brownie points when big Kahunas from OMB or the White House came to call.

Conclusions

Of course, if we had picked other Open Government efforts, we would have identified different measures, but there are some general lessons for the problem of developing Open Government metrics.

Get your hands dirty

Reviewing an actual program, rather than “Open Government” in the abstract, makes it easier to get a handle on what we might measure.

Decision requirements drive measurement needs

The line manager, about to decide whether to reach out first to EPA or MMS in expanding vUSA’s Federal footprint, will be eager to know how much back channels have been used to bring these two agencies into the oil spill cleanup. The GIS guru will want to know whether there’s something about mapping ocean currents that can’t be handled by vUSA’s existing standards.

Different decision-makers require different metrics

In contrast, the DHS senior manager had better not get lost in the weeds of GIS interoperability, but ought to be ever alert for signs that the whole vUSA effort misses the point.

In other words, when someone asks “what’s the right way to measure the success of this open government effort?”, the appropriate answer is “who wants to know?”.

Seek out natural experiments

Even with great measures, Open Government champions will always be confronted by the challenge of demonstrating that their project truly caused the successful result. A “natural experiment”, if you can find one, will go a long way towards addressing that challenge.


Grow your own metrics for online engagement

Posted: May 3rd, 2010 | Author: | Filed under: Civic engagement, Metrics | 3 Comments »

Start with “how do I know it’s working?”, not “what can I count?”

E-democracy.org has 16 years of experience in creating and hosting online civic forums via email and the web.

I participated in an email thread there recently that began with this question:
“How would you measure engagement on public issues via interaction in online spaces?”

It led to a lively exchange, but it left me unsatisfied. “Measure” and “metrics” create a kind of tunnel vision, focusing attention on what’s easy to count on the web (hits, number of posts per day, number of posters, pageviews, unique visitors), and away from our understanding and experience of online forums.

As it happens, Steve Clift, the founder of E-democracy.org, recently reported the results of a grant to create four online forums centered on a number of towns in rural Minnesota.

The report discussed a number of ways that forum discussions had affected their communities:

  • A discussion about new regulations regarding the handling of household wastewater led the county’s director of planning to reconsider regulatory language.
  • Discussions in a second forum generated stories in the local newspaper.
  • Participants used a third forum to get advice on how to fight the city’s withdrawal of a permit to raise backyard chickens.
  • Participants in another e-democracy forum, not covered in this report, used it to organize their response – including meetings with city officials – to a mugging near a transit stop. The transit agency’s community outreach staffer joined the forum, and then the discussion, to report on the agency’s actions. The president of the local neighbors group participated as well.
  • The report also noted that local government websites had linked to some of the forums and, in one case, the local government had sponsored the start-up of the forum.

These stories suggest a variety of measures that could be applied to e-democracy forums (a rough sketch of how the first two might be computed appears after the list):

  1. How many local government officials are forum members? What percentage of all local government officials are members?
  2. How many of these post, and how often do they post? How many of these posts reflect concrete changes in behavior (meetings scheduled, agenda items added or changed for official meetings, changes to legislation or regulation)?
  3. How many discussions have been used to organize meetings in the community or with government officials?
  4. How many discussions have received links from local or regional newspaper websites?
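
To make these concrete: given a membership roster and a post archive, a first cut at measures 1 and 2 might look like the sketch below. The data structures, the is_official flag, and the idea of hand-tagging posts that led to concrete changes are all hypothetical, just to show the shape of the computation:

```python
# Hypothetical forum data: a member roster and a post archive.
members = [
    {"name": "Alice", "is_official": True},   # city council member
    {"name": "Bob", "is_official": False},
    {"name": "Carol", "is_official": True},   # county planning director
]
posts = [
    {"author": "Alice", "led_to_action": True},   # e.g. agenda item added
    {"author": "Alice", "led_to_action": False},
    {"author": "Dave", "led_to_action": False},
]
total_local_officials = 40  # all officials in the jurisdiction (hypothetical)

# Measure 1: how many officials are members, and what share of all officials?
official_members = [m for m in members if m["is_official"]]
share = 100.0 * len(official_members) / total_local_officials
print(f"{len(official_members)} official members ({share:.0f}% of all local officials)")

# Measure 2: how many posts come from officials, and how many reflect
# concrete changes in behavior? (The led_to_action tag is a human judgment.)
official_names = {m["name"] for m in official_members}
official_posts = [p for p in posts if p["author"] in official_names]
actions = sum(1 for p in official_posts if p["led_to_action"])
print(f"{len(official_posts)} posts by officials, {actions} tied to concrete changes")
```

Note that the led_to_action tag can’t be computed from server logs; someone has to read the threads and make a judgment. That’s the point of growing your own measures: they start from the stories, not from what the web analytics package happens to count.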

These measures all need development, and we could likely find booby traps in each. But consider the conclusions of two (hypothetical) reports on the community impact of a (hypothetical) forum in Smallville:

Based on web metrics
The forum received 500 pageviews from 200 unique visitors per month.
It had a membership of 128 at the end of the year.
The average length of a visit was 3.5 minutes.
The average visit included 4 pageviews.

Based on “grow your own” measures drawn from these stories
Two of the five members of the city council became members of the Issues Forum. One joined after a discussion of the city’s response to last winter’s monstrous snowfalls erupted in the forum and led to a delegation of forum members testifying before the council about ineffective snow plowing.

One councilperson posts at least once a week. Three times over the course of the past year, she has responded to questions in the forum or asked for further information. She also introduced an amendment to a local zoning ordinance based on concerns raised in the forum.

A dozen forum members from Smallville South used the forum to organize a meeting with the transportation department to discuss the pothole problem on local streets. They are using the forum to follow up on the meeting, and an official from the transportation department posts updates on actions taken relating to potholes at least once a month.

The Smallville Gazette has used the forum to solicit feedback on its coverage of the city council. On average, one out of four of its weekly online issues includes a link to one or more Forum discussions.

Which would be more likely to persuade you that Smallville’s online forum had actively engaged the community?