Describe now, define later – a better way to understand Life 2.0

Posted: September 18th, 2010 | Filed under: Civic engagement, crowdsourcing, Open Government

When I venture into new arenas – social media, crowdsourcing, online engagement –  I’m fed by discussions with people who are doing this work. But, too often, the conversation gets bogged down by arguments about definitions.

For instance, in a chat about crowdsourcing recently, someone offered online commenting on government regulations as an example, then someone else – it might have been me – almost derailed the conversation by asking whether commenting really counted as “crowdsourcing”.

The problem: the person who’s just run a rule-making process that received thousands of comments knows what she did, how participants responded, what seemed to work, what fell short. But she – like all the rest of us – is clueless about whether this really is crowdsourcing.

In general: when we’re describing an experience or process we’ve been through, we know what we’re talking about. But when we debate whether that experience is an example of crowdsourcing, we don’t.

Bold claim on my part, I know. And, one day, we’ll be able to agree, quickly, on whether my friend’s process was crowdsourcing, collective intelligence, a prediction market, crowd-storming, peer production, or something else entirely.

Why? We’ll, collectively, have more experience, we’ll have come to (some) agreement on who the authorities are, and the definitions will actually be worth something.

But today we’re still groping, learning what’s been done, identifying new combinations that haven’t yet been tried but look promising.

In effect, we’re crowdsourcing the definition of “crowdsourcing”.

(And if you’re thinking that this advice applies to discussions about “social media”, “Gov2.0”, and “online engagement” as well, you’ve got my point exactly.)


Invest failure wisely to generate insights for better government

Posted: August 10th, 2010 | Filed under: Open Government

In brief: Government is continually confronted by challenges that can only be met by discovery, since existing knowledge is inadequate. Discovery relies, in part, on experimentation. Fruitful experimentation relies on a tolerance for significant failures.

Last week, Lovisa Williams wrote a well-received post, “Failure is not an option”, arguing that

Most civil servants … recognize we are in positions of public trust…. Therefore, we have developed a culture where failure is not considered an option. If we fail, then there could be serious consequences….

In order for Government to successfully evolve to the next generation of government, … we need to ensure we have established a means where we can continue to feed the evolution….

We do have things that don’t work as expected and absolutely fail, but we don’t talk about these things even within our own agencies. We are also missing the potential for us to start exploring other paths or opportunities earlier.

She’s onto something very important.

Government action takes place in many different contexts. When the problems are familiar, when well-known solutions produce predictable and satisfactory results, and when the consequences of failure are high, it makes sense to go for predictable results and to sanction failure.

But today we face challenges that are unprecedented, even baffling: some consequential, even earth-shattering – global warming, deep-sea oil spills, deflation, Al Qaeda and other deadly yet ghostly foes – and some merely puzzling – social media, generational changes, and new political constellations. And we don’t yet have satisfactory solutions.

So, failure, to some degree, is inevitable. I’d argue, with Lovisa, against the massive, glacially slow failures that, if they teach us at all, provide too little new information, too late for us to change course.

Instead, I’d argue for well-designed experiments, where we recognize what we don’t know and invest effort (and, to some degree, failure) to generate insights that arrive early enough to make a difference. Instead of “spending” failure slowly, covertly, and massively, let’s invest it openly, quickly, carefully, and in small amounts, to create new methods, opportunities, and success.

Notes:

  • Eric Ries’s podcast on Lean Startups: Doing More with Less shows these ideas at work in business and entrepreneurship.
  • @digiphile’s tweet mashing up comments on Lovisa’s post with @marcidale’s notion of #AgileGov was helpful in shifting my thinking in this direction.
  • Then, Peter Norvig’s observation that “If you do experiments and you’re always right, then you aren’t getting enough information out of those experiments” put it all together. (Thanks to lesswrong.com for the pointer.)


Brownie points, or results?

Posted: May 20th, 2010 | Filed under: Metrics, Open Government

Using the Gulf oil spill to get clear about measuring Open Government

Measure “Open Government”? Yes, but …

I think that the success of the Obama Administration’s Open Government effort is critical, but I’m put off, even bored, by the measurement discussions to date. Imagine that you’ve got good reason to believe that your nephew is the next Jackson Pollock and your niece the next Maya Lin, and that the first report back from their studios is: “he’s painted more than 500 square feet of canvas! she’s created two and a half tons of sculpture!” – and you’ll know how I feel.

It’s as if someone brought a speedometer to a sunset.

In December, Beth Noveck, the leader of the Administration’s Open Government efforts, wrote that measures of Open Government would be useful as a way to hold the Administration’s “feet to the fire” and to ensure that proposed changes were implemented. She suggested a variety of measures, including:

  • The year-to-year percentage change of information published online in open, machine-readable formats;
  • The number of FOIA requests processed and the percentage change in backlog;
  • The creation of “data platforms” for sharing data across government;
  • The successful posting of data that increases accountability and responsiveness, public knowledge of agency operations, mission effectiveness, or economic opportunity.
(I’ve left the link in, but alas the page has disappeared.)

To be fair, it’s a tough problem. As the “Measuring Success” group from one of the Open Government Directive workshops noted, seemingly reasonable measures may be difficult to interpret, for instance: the time spent on a website might signal popular use … or that users were confused.

So let’s start again, this time from the bottom up: if you were managing an Open Government effort, what would you want to measure? For instance…

Virtual USA

In February 2009, Homeland Security rolled out Virtual USA (vUSA) for the sharing of geospatial data between emergency response agencies, “a national system of systems … so that disparate systems can communicate with each other”. It will allow responders in multiple locations to coordinate their efforts with a common set of images and thereby reduce confusion and shift at least some activity away from phone calls. It is a bottom-up collaboration between DHS, first responder groups, and eight southeastern states. The system depends in part on states and localities to provide data, and is locally controlled: the agency providing the data owns it, controls how and when and with whom it is shared, and can use its existing software to do so.

vUSA seems impressive: two more pilots are starting, covering eleven more states, and the user community at FirstResponder.gov has about 150 members.

The nearest level of management

Continuing with our exercise, imagine that you’re in charge of vUSA. You face decisions about which additional GIS standards and technologies to incorporate, how to divide resources between technology vs. additional outreach or training for participating states, and whether to reach out to additional Federal agencies, for instance, the Minerals Management Service, which had primary regulatory authority over the BP oil well.

To guide these decisions, you’d ask your staff these quantitative questions:
  • How much staff time in participating states has shifted from coordination via telephone to coordination via vUSA?
  • For what issues and data needs are participating states still using the phone?

and these qualitative ones:

  • What would have changed in the oil spill response if vUSA didn’t exist?
  • How does adoption and involvement differ between various agencies in the participating states and the various components of each agency?
  • Are response sites still using fax, couriers, or other workarounds to share information?

Big picture managers

Now zoom out a bit: imagine that you’re a senior manager at the Department of Homeland Security (DHS), with ultimate responsibility for vUSA but also many other programs.

Given your agency’s recent history with Katrina on the Gulf Coast, among other things, you’ll monitor how smoothly local, state, regional, and federal actors work together in dealing with emergencies and wonder whether staff increases (e.g. for liaison officers), training, or incentives would be more likely than technology (such as vUSA) to improve coordination. And you’d consider whether coordination should be addressed more broadly than geospatial information sharing, for instance to include the development of shared goals among the coordinating agencies or agreement on division of roles and responsibilities.

You’d ask the questions we’ve already considered, but you’ve got a broader range of responsibilities. The vUSA manager’s career will live or die by the success of that effort, but you’re worried about DHS’s success in general. Maybe there are better ideas and more worthwhile efforts than vUSA.

To assess this, you’d ask your staff to research these issues:

  • How eager are other states to join the vUSA effort? (So the two additional pilots would be a good sign.)
  • How has vUSA affected the formulation of shared goals for the oil spill clean-up effort?
  • Is each agency involved playing the role that it is best suited for in the clean-up?
  • How has the emergency response to the flooding in Tennessee, a participant in vUSA, differed from the response to flooding earlier this year in Minnesota and North Dakota, states that don’t participate in vUSA?

The last question is an example of a “natural experiment”, a situation arising out of current events that allows you to compare crisis management and response assisted by vUSA vs. crisis management and response handled without vUSA, almost as well as you could with a controlled experiment.
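
To make that concrete, here is a minimal sketch of how such a natural-experiment comparison might be tabulated. The states are the ones mentioned above, but the outcome metrics and every number below are invented for illustration; a real analysis would draw on after-action data and control for differences in the severity of each flood.

    # A toy tabulation of the natural experiment described above.
    # The metrics and every number below are invented for illustration.
    flood_responses = {
        # state: (participates in vUSA?, hours to a shared operating picture,
        #         phone/fax workarounds logged during the response)
        "Tennessee":    (True,  6, 12),
        "Minnesota":    (False, 18, 41),
        "North Dakota": (False, 15, 37),
    }

    def average(values):
        return sum(values) / len(values)

    vusa     = [r for r in flood_responses.values() if r[0]]
    non_vusa = [r for r in flood_responses.values() if not r[0]]

    print("Hours to a shared operating picture: vUSA %.1f vs non-vUSA %.1f"
          % (average([r[1] for r in vusa]), average([r[1] for r in non_vusa])))
    print("Phone/fax workarounds logged:        vUSA %.1f vs non-vUSA %.1f"
          % (average([r[2] for r in vusa]), average([r[2] for r in non_vusa])))

The arithmetic is trivial; the value comes from current events having supplied roughly comparable floods in participating and non-participating states.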

You’d also have some quantitative questions for your staff, for instance: how have the FEMA regions participating in vUSA performed on FEMA’s overall FY 2009 Baseline Metrics from the agency’s Strategic Plan?

And back to “measuring open government”

Note how much more compelling these “close to the ground” measures are than the generic “Open Government” metrics. If you were told, this morning, that a seemingly minor vUSA glitch had forced the oil spill command center to put in extra phone lines, no one would have to interpret that measure for you: you’d already know what you’re going to focus on today. And if, as a senior manager, you had a report in front of you demonstrating that none of the dozen hiccups in the response to North Dakota’s flooding were repeated in the vUSA-assisted response to the Tennessee disaster, you might actually look forward to a Congressional hearing.

Two of the Open Government measures are relevant:

  1. vUSA is a new platform for sharing data across government.
  2. It’s certainly intended to increase DHS’s responsiveness and its effectiveness in carrying out its mission, though it appears that only some vUSA data are publicly available.

But these considerations would hardly be useful to the line manager, and they’d be useful to the agency’s senior managers mostly as checkboxes or brownie points when big Kahunas from OMB or the White House came to call.

Conclusions

Of course, if we had picked other Open Government efforts, we would have identified different measures, but there are some general lessons for the problem of developing Open Government metrics.

Get your hands dirty

Reviewing an actual program, rather than “Open Government” in the abstract, makes it easier to get a handle on what we might measure.

Decision requirements drive measurement needs

The line manager, about to decide whether to reach out first to EPA or MMS in expanding vUSA’s Federal footprint, will be eager to know how much back channels have been used to bring these two agencies into the oil spill cleanup. The GIS guru will want to know whether there’s something about mapping ocean currents that can’t be handled by vUSA’s existing standards.

Different decision-makers require different metrics

In contrast, the DHS senior manager had better not get lost in the weeds of GIS interoperability, but ought to be ever alert for signs that the whole vUSA effort misses the point.

In other words, when someone asks “what’s the right way to measure the success of this open government effort?”, the appropriate answer is “who wants to know?”.

Seek out natural experiments

Even with great measures, Open Government champions will always be confronted by the challenge of demonstrating that their project truly caused the successful result. A “natural experiment”, if you can find one, will go a long way towards addressing that challenge.


Grow your own metrics for online engagement

Posted: May 3rd, 2010 | Filed under: Civic engagement, Metrics

Start with “how do I know it’s working?”, not “what can I count?”

E-democracy.org has 16 years of experience in creating and hosting online civic forums via email and the web.

I participated in an email thread there recently that began with this question:
“How would you measure engagement on public issues via interaction in online spaces?”

It led to a lively exchange, but it left me unsatisfied. “Measure” and “metrics” create a kind of tunnel vision, focusing attention on what’s easy to count on the web (hits, number of posts per day, number of posters, pageviews, unique visitors), and away from our understanding and experience of online forums.

As it happens, Steve Clift, the founder of e-democracy, recently reported the results of a grant to create four online forums centered on a number of towns in rural Minnesota.

The report discussed a number of ways that forum discussions had affected their communities:

  • A discussion about new regulations regarding the handling of household wastewater led the county’s director of planning to reconsider regulatory language.
  • Discussions in a second forum generated stories in the local newspaper.
  • Participants used a third forum to get advice on how to fight the city’s withdrawal of their permit to raise chickens in their backyard.
  • Participants in another e-democracy forum, not covered in this report, used it to organize their response to a mugging near a transit stop, including meetings with city officials. The transit agency’s community outreach staffer joined the forum, and then the discussion, in response to their actions. The president of the local neighbors group participated as well.
  • The report also noted that local government websites had linked to some of the forums and, in one case, the local government had sponsored the start-up of the forum.

These stories suggest a variety of measures that could be applied to e-democracy forums:

  1. How many local government officials are forum members? What percentage of all local government officials are members?
  2. How many of these post, and how often do they post? How many of these posts reflect concrete changes in behavior (meetings scheduled, agenda items added or changed for official meetings, changes to legislation or regulation)?
  3. How many discussions have been used to organize meetings in the community or with government officials?
  4. How many discussions have received links from local or regional newspaper websites?

These measures all need development, and we could likely find booby traps in each. But consider the conclusion of two (hypothetical) reports on community impact of a (hypothetical) forum in Smallville:

Based on web metrics
The forum received 500 pageviews from 200 unique visitors per month.
It had a membership of 128 at the end of the year.
The average length of a visit was 3.5 minutes.
The average visit includes 4 pageviews.

Based on “grow your own” measures drawn from these stories
Two of the five members of the city council became members of the Issues Forum. One joined after a discussion of the city’s response to last winter’s monstrous snowfalls erupted in the forum and led to a delegation of forum members testifying before the council about ineffective snow plowing.

One councilperson posts at least once a week. Three times over the course of the past year, she has responded to questions in the forum or asked for further information. She also introduced an amendment to a local zoning ordinance based on concerns raised in the forum.

A dozen forum members from Smallville South used the forum to organize a meeting with the transportation department to discuss the pothole problem on local streets. They are using the forum to follow up on the meeting, and an official from the transportation department posts updates on actions taken relating to potholes at least once a month.

The Smallville Gazette has used the forum to solicit feedback on its coverage of the city council.

On average, one out of four of its weekly online issues includes a link to one or more Forum discussions.

Which would be more likely to persuade you that Smallville’s online forum had actively engaged the community?
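
For the curious, here is a minimal sketch of how measures like those numbered above might be computed, assuming (hypothetically) that the forum software can export its member list and discussion threads with a few simple flags. All of the data below are invented; the point is only that these “grow your own” measures are countable, not just anecdotal.

    # A sketch of the "grow your own" measures above. All data are made up.
    members = [
        {"name": "Councilmember A", "local_official": True},
        {"name": "Councilmember B", "local_official": True},
        {"name": "Resident C",      "local_official": False},
    ]
    TOTAL_LOCAL_OFFICIALS = 5  # e.g., a five-member city council

    discussions = [
        {"topic": "snow plowing",   "official_posts": 4, "organized_meeting": True,  "newspaper_link": True},
        {"topic": "potholes",       "official_posts": 9, "organized_meeting": True,  "newspaper_link": False},
        {"topic": "chicken permit", "official_posts": 0, "organized_meeting": False, "newspaper_link": True},
    ]

    officials_in_forum = sum(1 for m in members if m["local_official"])
    print("Officials who are members: %d of %d (%.0f%%)"
          % (officials_in_forum, TOTAL_LOCAL_OFFICIALS,
             100.0 * officials_in_forum / TOTAL_LOCAL_OFFICIALS))
    print("Posts by officials:", sum(d["official_posts"] for d in discussions))
    print("Discussions that organized meetings:",
          sum(1 for d in discussions if d["organized_meeting"]))
    print("Discussions linked by the local paper:",
          sum(1 for d in discussions if d["newspaper_link"]))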


A better question than “what’s the business case for Gov2.0?”

Posted: October 30th, 2009 | Filed under: Civic engagement, Government, Social networking, Technology

Don’t ask “how do I make the business case for Gov2.0?”; tell us about your agency’s relationships, and the case will become obvious.

Imagine that you’re a savvy Federal staffer in the US Defense Department’s Military Health System. You’re excited by what the US Patent Office accomplished with its Peer to Patent program, eager to copy the success of Twestival, which used Twitter to kick off events in 200 cities across the globe and raise $250,000 for water charities, and inspired by OSTP’s open government initiatives. So on Govloop, or on your blog, you pose the question: “what’s the business case for Gov2.0 (or social media or Web2.0 or Twitter)?”

and … crickets – a kiss of death for the conversation. You may get some well-meaning feedback from others who have confronted the same challenge, but the thread runs out pretty quickly. It seems that no one can help you.

What happened? You’ve focused your readers on technology (Twitter? YouTube? Maybe a wiki?) and on generalities (“communication is good”; “crowdsourcing rocks!”), and away from the relationships, mission, history, and other specifics of your organization that would give you and your readers the raw materials to create the business case.

Worse, we’ve all become distracted from your unique role in this conversation: you know the agency’s goals, its current challenges, what keeps senior management awake at night, what appropriations they’re looking forward to, and what headlines they’re dreading. You know that, or ought to. The rest of us don’t.

So, consider, instead, this conversation opener:

I work for the Assistant Secretary of Defense for Health Affairs. As you may remember, one of my boss’s predecessors was featured in the Washington Post’s 2007 expose of poor conditions at Walter Reed Army Medical Center, and we’re still working to rebuild trust with injured soldiers and their families. We’re also trying to strengthen our relationships with medical researchers. And, by the way, we run TRICARE, the military’s HMO, and we’re working hard to keep the program affordable, with low deductibles and co-pays. What Web2.0 tools would be useful to us and how should we measure results?***

Imagine the roaring conversation this would inspire:

  • Your readers from the Centers for Disease Control, the Department of Veterans Affairs, and the Centers for Medicare & Medicaid Services will be typing their suggestions before they’ve finished reading your post.
  • Someone might point you in the direction of OrganizedWisdom — an aggregator of health expertise — and suggest that you explore how you can engage your customers in becoming guides to military health issues.
  • The success of PatientsLikeMe will be mentioned as a model for supporting the wounded warriors you serve.
  • Someone else might point you to Healing in Community Online, a sort of Second Life for patients and their families.

The question of metrics would become easier as well: TRICARE already surveys its beneficiaries regularly to determine how they perceive the accessibility and quality of care (and if you didn’t already know that, rest assured someone would tell you). Surely you could work some questions into that to evaluate your Gov2.0 initiative?

Now, with your help, your readers are brainstorming how your agency can accomplish its mission and deal with its challenges.

So, tell us about your agency’s key relationships, inside and outside government, and the ones that are most troubled, and watch the conversation explode.

(***Background on USDOD – Health Affairs is drawn from NAPA’s description of top “prune” jobs in the Federal Government. It might be an eye-opening exercise to review the other positions listed and brainstorm how Gov2.0 could, specifically, help each of these appointees.)

(Cross-posted from Govloop.)


Lessig: control transparency? No way!

Posted: October 16th, 2009 | Filed under: Ethics, Government, Transparency

I doubt that Lawrence Lessig or his editors meant to set a trap when they titled his article “Against Transparency”. Nonetheless, they’ve bagged some prominent bloggers, who all assume that the piece argues that transparency can and should be controlled:

  • “Larry supplies a very cogent argument against the disclosure of too much data from Congressional members.” (Brian Drake)
  • “Naked transparency, the thinking goes, is really a push for Congress to take its coat off…. In other words, an extremist position has the effect of simply tugging the political center closer towards openness” (Nancy Scola)
  • “He’s against transparency as the sole requirement for political reform, and he’s against the transparent dumping of data without tools for making sense of it.” (David Weinberger)

Drake, Scola, and Weinberger make broader arguments that are well worth reading, but in these sections, they miss Lessig’s point. As I read him, arguing for restrictions on transparency makes as little sense as arguing that hurricanes and earthquakes should spare larger cities. And “naked transparency” is no mere rhetorical ploy, just as a tsunami warning is rather more than a suggestion that sunbathers move their towels a few feet up the shore.

(Weinberger has written at least three posts in response to Lessig. In his first piece, a careful walkthrough, he catches Lessig’s point precisely, when he writes “We can’t fight the Net’s lessening of control over info…. We need solutions that accept the Net’s effect.” In a second piece, Weinberger both gets and misses this point. He’s correct in noting that, in warning of the catastrophe of cynicism transparency may create, Lessig is no Internet triumphalist. But, contra Weinberger, Lessig is in a sense a determinist when he argues that transparency can’t be stopped.)

Lessig’s point is that transparency is inexorable. Just as newspapers have failed to restrict Google News and Craigslist, and as the recording industry has largely failed in its legal and technological efforts to restrict file-sharing, efforts to restrict transparency will fail as well:

In all these cases, the response to the problem is to attack the source of the problem: the freedom secured by the network. In all these cases, the response presumes that we can return to a world where the network did not disable control.

But the network is not going away.

He has serious concerns about transparency, but he spends not a word arguing that it should be restricted. Instead, he seeks

a solution … that accepts that transparency is here to stay–indeed, that it will become ever more lasting and ever more clear–but that avoids the harms that transparency creates…

More about this later, but puh-leeze note that Lessig’s point is that transparency is inexorable, not that it can be restricted.


Lessig: “Anti-corruption transparency” is inexorable; cynicism is not.

Posted: October 13th, 2009 | Filed under: Civic engagement, Ethics, Transparency

Lawrence Lessig’s recent “Against Transparency: The perils of openness in government” in The New Republic has generated controversy and comment, in part because it has been misread as an argument that information on political and financial influence should be restricted rather than released.

This post summarizes the parts of his article that pertain to the current government transparency movement and his cautions, criticisms, and remedies. Later posts will summarize some reactions from Carl Malamud, Ellen Miller of the Sunlight Foundation, and others, and then offer my take.

Focus: “Anti-corruption transparency”, which …

Lessig is concerned with one specific kind of transparency: projects intended to reveal improper influence or outright corruption, e.g. by correlating incentives provided to legislators, such as campaign contributions, with their votes.
Transparency is good when information is provided in usable form to those who are able to use it wisely, e.g. the miles-per-gallon ratings which guide individuals as they buy automobiles. Consumers can benefit from “mpg transparency” without much effort.

… misleads us because we don’t pay enough attention.

In contrast, benefitting from what one might term “anti-corruption transparency” requires a long attention span.
But people, often quite rationally, limit the attention they devote to interpreting information. Indeed, politics has become the art of exploiting short attention spans, “tagging your opponent with barbs that no one has time to understand, let alone analyze”.

So, in too many cases, we jump to the conclusion that money has bought a vote…

Unfortunately, what others call “connecting the dots” is the path of least resistance. For instance, a journalist or an opponent highlights that First Lady Clinton opposed the Bankruptcy Bill in 2000, that soon-to-be Senator Clinton received $140,000 in campaign contributions from credit card and financial services firms later that year, and that she voted for the bill in 2001. Although there are many reasons why she might have switched her position, “[e]veryone … ‘knows’ just why she switched, don’t they?” We connect the dots.
Lessig argues, in contrast, that “[a]ll the data in the world will not tell us whether a particular contribution bent a result by securing a vote … that otherwise would not have occurred.”
Anti-corruption transparency is problematic because of “its structural insinuations–its constant suggestions of a sin that is present sometimes but not always”.

… and become cynical about government, and that’s bad.

It is already the “opinion of the vast majority of the American people, [that Congress] sells its results to the highest bidder”, and anti-corruption transparency, “unqualified or unrestrained by other considerations”, will just make this worse. He wants to reduce this cynicism.

But this transparency can’t be stopped.

But we will not find the answer to our problems in a Luddite counterattack: “The network is not going away…. [T]ransparency is here to stay–indeed, … it will become ever more lasting and ever more clear….”

Instead, let’s restructure campaign finance.

Instead, the solution is “[a] system of publicly funded elections [which] would make it impossible to suggest that the reason some member of Congress voted the way he voted was because of money…”. “Qualified candidates” would be provided with a publicly funded “grubstake”, then permitted to raise additional funds from citizens up to a low cap: $100 per person per cycle.
This will increase people’s trust in Congress “by establishing a system in which no one could believe that money was buying results”.



CAP’s great Twitter 101 – and ways to make it even better

Posted: September 25th, 2009 | Filed under: Civic engagement, Social networking

What was good.

I returned earlier today from a useful Twitter 101 session hosted by CAP’s Alan Rosenblatt at the Internet Advocacy Roundtable. It was great. The next one could be even better, if we could learn more about what the presenters knew in their bones (see below).

Like many discussions on how to get started with social media, the conversation bounced around.

Tech: Twazzup, ÜberTwitter for BlackBerry, and HootSuite were new to me and look interesting. Much more on Twitter resources here, courtesy of Shaun Dakin.

Stories: AAUW is drawn out of silent lurkerdom when they respond to a tweet from a disappointed soon-to-be ex-member who has misinterpreted a local chapter’s action; a conversation results, the member is mollified, and AAUW managers see the value. Dakin’s carefully nurtured network of robocall sleuths identifies the first (known) robo-sex-call one night, and the next day, the news hits the Rachel Maddow show. (I realized Dakin is, in effect, the real-time web’s ombudsperson for robocalls.)

Tips: Twitter is a tool, not a strategy. When you start, decide what your voice is going to be. Keep your Twitter stream focused – eclectic is OK, but beware that if you veer from months of all business to throwing in your sports enthusiasms, you’ll lose followers. (Via @epolitics) Why would you want to hear only from people who agree with you? (Via @digitalsista)

Get a senior manager’s buy-in by getting him or her on the rostrum for a new media conference, and let the infectious energy work its magic. (Via @GloPan)

Conservatives tend to cluster around a few hashtags, e.g. #tcot, while progressives tend to use specific hashtags for specific issues. (This seems important, perhaps because it demonstrates a degree of focus.) (Via @digitalsista)

Effectiveness requires listening, which amounts to research, and it’s hard, time-consuming work. (Via @henrim)

What would have made today’s session even better?

One of the presenters crystallized this for me when he insisted that the social – non-technical – aspects of using Twitter could only be discovered in practice, not taught, and that it was more art than science. But there are more than a few art schools, and though you can’t teach inspiration, you can teach craft.

I suspect that today’s presenters (and more than a few audience members) knew in their bones more than they could say about how to do it well. These questions might have helped:

  • How do you insert yourself in conversations and get heard?
  • What are your rules of thumb for getting started on a new campaign?
  • When you “listen” to Twitter, how do you do it, what do you listen for, and when and how do you respond?
  • When you’ve “fallen off the horse” in your use of Twitter, how do you get back on?

I’m sure that the presenters did their best to tell us all they knew how to say, but I doubt they told us all they knew how to do. I’m hungry for more.


Realize that the Recovery.gov IT National Dialogue is not Digg, and take it to the next level

Posted: May 1st, 2009 | Filed under: Civic engagement

The Recovery Accountability and Transparency Board (RATB) is sponsoring a public, web-based dialogue on promising IT to support transparency and accountability of the Administration’s Recovery Act spending. It is groundbreaking. However, seeming parallels with Digg, Slashdot, and other social media sites are misleading. Indeed, they obscure steps that could still be taken to make this effort, and future efforts, highly effective models of citizen engagement and transparency.

Clay Shirky is a wise observer of the rise of easy online collaboration processes for large groups. His work provides us with a framework to clarify the ways in which the Recovery.gov effort is fundamentally unlike many more familiar social networks and to suggest tweaks that would help it realize its unique potential.

Shirky’s Promise, Tool, Bargain

In Here Comes Everybody, Clay Shirky argues that successful web-based coordination communities meet three challenges:

  1. A plausible promise – not too mundane, not too sweeping – that persuades would-be participants to join the group
  2. A useful tool that supports the desired coordination, and
  3. A bargain that develops through interaction and over time, often implicitly, which specifies what participants can expect and what is expected of them

For instance, Delicious.com’s promise is that it provides personal value – storing your bookmarks and making them accessible from anywhere – from the get-go. The tagging component of Flickr provides a tool that makes it easier for members to connect with other participants who have posted similar photos, famously, of the Coney Island mermaid parade. And the bargain for Flickr’s “Black and White Maniacs” group requires that participants who have posted a photo immediately comment on at least two other photos, in order to keep an interaction going.

How does the National Dialogue website fare on Shirky’s criteria?

Promise: the good stuff is vague

The introduction bills the Dialogue as an opportunity to help the Administration keep its commitment to make Recovery spending transparent and accountable:

Your ideas can directly impact how Recovery.gov operates and ensure that
our economic recovery is the most transparent and accountable in history….

Participants can refine these ideas in open discussion, and vote the best ones to the top.

The call-for-participation email message from 4/23 notes:

The results of the dialogue will be reviewed for the most innovative suggestions around making Recovery.gov a more effective portal for transparency.

The “about” page makes a commitment:

Upon the close of this dialogue on May 3rd, 2009, the President’s Recovery Accountability and Transparency Board will review the results of this discussion.

The promise is vague, but might be glossed as “you can put your proposal in front of us – the government – and we will review it carefully”.

Unlike the first promise of Delicious, thenationaldialogue.org does not promise to serve the participant in a direct or tangible way, nor to connect him or her with other participants.
Further, the central part of the promise – “we will review it carefully” – in fact happens mostly outside of the tool, indeed, out of sight.

(More below on the listening/reviews apparently already underway.)

Tool: the payoff happens offline and out of sight

The website seems adequate for the first part of the promise – participants can submit proposals quite easily and there is a tutorial as well. There is little to go on to determine how well the tool serves the review process. Since proposals can be sorted by average user rating and number of comments and the invitation states that participants can vote the best ideas to the top, we can infer that these criteria will be used to select the proposals for review. But, again, it’s vague – the top 10 ideas? The top 10%?

An even bigger question is whether voting and commenting by fellow participants are appropriate features, given the promise and purpose. Digg and Slashdot are misleading models for thenationaldialogue.org: they support lateral communication between participants. For Digg or Slashdot, the reading audience is also the voting audience.

Thenationaldialogue.org supports, instead, vertical and asymmetric communication – from participants up to RATB IT staff. These ultimate “idea consumers” are as far as we know not the voters or commentators on the site. Thus, it’s plausible and even reasonable that Federal IT staff will evaluate and adopt ideas with low ratings or few comments. So, how will participant ratings of ideas be helpful to them?

Further, one could imagine situations where voting is actively counterproductive – if a small company or one-person firm proposes an idea that is feasible and valuable but contrary to the interests of a large IT company whose employees are participating in force on the site, the behemoth could easily and conclusively vote down the dangerous (to them) idea. It is to the credit of the participants that this doesn’t appear to be happening, but it does raise the question of why voting is a feature on this site.
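
A toy calculation, with invented numbers, shows how easily such a bloc could dominate an average rating: ten genuine five-star ratings are swamped by fifty coordinated one-star ratings.

    # Invented numbers illustrating bloc voting on a rated proposal.
    genuine_votes     = [5] * 10   # individual participants who value the idea
    coordinated_votes = [1] * 50   # a large organization voting as a bloc

    all_votes = genuine_votes + coordinated_votes
    print("Average rating with the bloc:    %.2f / 5" % (sum(all_votes) / len(all_votes)))
    print("Average rating without the bloc: %.2f / 5" % (sum(genuine_votes) / len(genuine_votes)))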

(In a future post, I’ll examine ways in which the RATB could create future events that explicitly supported participant to participant interaction as an appropriate part of the promise, tool, and bargain, but for this post I’ll focus solely on the Dialogue as an event for suggesting IT ideas for Federal review and adoption.)

Bargain: “Wham, bam, thank you, citizens” is not the way to go

The core of Shirky’s notion of bargain is that it evolves over time and that it is as much or more a matter of participants’ understanding, assumptions, and expectations as it is of any “fine print” or “terms of use”. Kevin Rose of Digg discovered this in 2007 when his users revolted against his efforts to comply with legal demands from the MPAA to remove information from Digg that could be used to crack HD DVD encryption. Digg users’ expectation was that they controlled what was voted up, and Rose quickly realized that his community would disintegrate unless he bowed to their wishes.

With only one week allotted for the current discussion, thenationaldialogue.org is not yet in a position to benefit from an evolving bargain – there’s no time for it to develop.

Inches from greatness: Suggested improvements

I’ve worked with the Federal Government, notably on an early web conference in support of then Vice President Gore’s Reinventing Government initiative – similar in some respects to this effort – and I’m fully aware that the perfect can be the enemy of the good.

It is remarkable that this site exists, and I think it provides a great foundation for future efforts. It also makes sense to view this event in a broader context and consider additions and changes that build relationships not only for this event, but also for similar events in the future. So I’d like to focus on where RATB could take it from here.

Clarify and bolster the promise of careful review

RATB should recognize that some of the lessons from Digg, Slashdot, and similar social media sites do not apply, to the extent that this site is for asymmetrical communication between developers and idea-mongers on one side and Federal IT staff on the other, and should tune the explicit promise with this in mind.

RATB should clarify whether each idea will be reviewed and, if not, how comments and ratings will be used to prioritize ideas for review, and announce this clarified promise on the site and in email in the coming days.

At this point, it seems likely that the total number of ideas will come to fewer than 600, so it would not be unreasonable for participants to expect that each idea will get at least one thoughtful comment. In any case, RATB should be explicit, transparent even, about this.

Align the tool with the promise – make the review transparent

To fulfill the promise of careful review for ideas, RATB could require that its IT reviewers use this site for comments and votes on the ideas, rather than doing the review offline and out of view. Comments and votes could be anonymous, if necessary. But thoughtful feedback, on the substance of the ideas, their feasibility in the ARRA context, and on the way participants presented them, could be a huge win for participants. And it would be a tangible fulfillment of this site’s promise.

For future events, RATB and others in the Administration should consider whether voting and rating is appropriate, given the differences in social context between Digg and these events.

Build the bargain for the future: there will be more dialogues

Shirky reminds us that the bargain develops, organically and implicitly, over time.

If you look carefully, you’ll see that the content of the earlier Health IT dialogue from October 2008 is still present on www.thenationaldialogue.org. From what I can tell, the profiles and userids of the previous event are entirely disjoint from those of this event.

I’d suggest that future dialogues break the precedent of discontinuity and, instead, build explicitly from this event. RATB should invite current participants to continue to follow the development of Recovery.gov via a specific feed (email, Twitter, blog). People arriving in a week or a month or a year should of course also be invited to join, but current participants should be treated, welcomed, and celebrated as “early adopters” and pioneers.

In addition to using the site to present Federal IT staff comments and ratings, it could also be used for new ideas, initiated either by RATB or by ordinary participants. The need for new ideas and the inevitable generation of new ideas surely won’t stop on May 3rd.

Keeping the site “hot” would jumpstart subsequent dialogues and build a base of participants who are wise both in the use of the tool itself, and in the issues and constraints involved in Federal IT issues.

RATB might also draw on its interagency relationships to bring promising ideas to the attention of IT staff in other Departments and Agencies. Minimally, it could send email showing other IT managers how to use the tags and the search engine for a quick review of ideas that may be of interest to them. (Imagine a headline highlighting a small business that used this Dialogue to grow its relationship not only with RATB but with another Federal agency, with great benefits to transparency and efficiency.)


Footnote:
More on Recovery.gov listening efforts [back]
It is too soon to tell whether the promise will be fulfilled, but two things suggest that some amount of review is already happening:

First, as of 3pm ET on Friday afternoon, Google reveals that 13 of the roughly 400 ideas have received comments from participants who are designated “dialogue catalysts”, notably one person from the Recovery Accountability and Transparency Board. A tweet from @Natldialogue describes the catalysts’ role as trying “to ask focusing [questions and ] add detail [to] discussions; they promote further exploration w/o a particular POV”. A review of the catalysts’ comments suggests that they are meeting their goal, typically encouraging the author of the idea and asking in specific ways for more information. But why for only 3% of the ideas?

Second, a mass email from the organizers to the participants on the morning of the fifth day noted:

The Dialogue has brought forth lively discussion on how to make Recovery.gov a place where the public can monitor the expenditure and use of recovery funds. The growing number of users and ideas posted on the site in just a few days illustrate how interested the IT community is in impacting the operation of Recovery.gov….

Now with three days left in this week-long Recovery Dialogue, we are receiving some interesting and thoughtful submissions. However, there are a few key concepts around which we need your ideas and approaches.

This could be read as a direct reaction to the ideas posted, but given its vagueness, it’s equally plausible that this email was drafted before the Dialogue began.