Public engagement: Knowing what we need to measure

Posted: March 16th, 2015 | Filed under: Civic engagement, Framework, Metrics

What decisions do we face?

Now that we’ve sketched out what one type of public engagement does and how it does it, we can get a better handle on the kinds of decisions that will arise as we manage a public engagement process.

(Cockpit photo by Aleksander Markin.) Dials and indicators are useful if …

In the middle of a public engagement project, the most basic decision a manager faces is “Are we done yet?”

Looking back at a project in retrospect, for instance when deciding which methods or consultants to use for an upcoming assignment, the basic question might be “Was the project successful?”

These big questions break down into lots of little questions, as we can see in our bridge example.

Is more engagement work required to create the needed level of long-term support across stakeholders? Can we count on the bridge’s neighbors to see the project out, in spite of the disruptions we expect as a result of construction? What about unrealistic expectations: Have the overly rosy hopes for rush-hour traffic reductions been corrected? And what about perceived unfairness: Are the city’s taxpayers likely to continue funding bridge maintenance even though the bridge primarily benefits commuters?

Questions like these could be addressed by polling, surveys, and interviews of the relevant stakeholders.

Of course, since the goal is long-term stakeholder support, the proof of the pudding is whether –  five, ten, and twenty years hence – the support is there. Research centers and foundations that study public engagement should revisit past projects to determine how well stakeholder support was sustained.
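If that follow-up data were actually collected, the bookkeeping would be straightforward. Here is a minimal sketch, assuming pandas and a follow-up survey that yields one support rate per stakeholder group per wave; the wave years, group names, and rates below are all invented for illustration.

```python
# Hypothetical sketch: tracking whether stakeholder support is sustained over
# follow-up survey waves. The wave years, group names, and support rates are
# invented for illustration; the analysis assumes pandas is available.
import pandas as pd

# Each row: one stakeholder group's support rate in one follow-up wave.
followups = pd.DataFrame({
    "wave_year":         [5, 5, 5, 10, 10, 10, 20, 20, 20],
    "stakeholder_group": ["commuters", "cyclists", "taxpayers"] * 3,
    "support_rate":      [0.84, 0.77, 0.69, 0.80, 0.74, 0.61, 0.78, 0.71, 0.55],
})

# Support rate for each group, by wave.
support_by_wave = followups.pivot(index="stakeholder_group",
                                  columns="wave_year",
                                  values="support_rate")
print(support_by_wave)

# Flag groups whose support has eroded by more than ten points since year 5.
eroding = support_by_wave[support_by_wave[5] - support_by_wave[20] > 0.10]
print("Groups where support has not been sustained:", list(eroding.index))
```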

Component Processes

As the public engagement effort proceeds and we manage each of the five component processes, we are continually making one important decision:

Have we done enough at each particular stage to allow succeeding stages to be successful?

This is a broad topic, but I’ll illustrate the approach with questions that could, in turn, drive metrics.

OUTREACH: Are we reaching cyclists as well as commuters, low-income as well as middle-income residents? Does the sample group pulled into the engagement process match the larger stakeholder population in key characteristics? As this larger population changes over time, can we pull the right kinds of new members into our sample group to stay in sync? Have we pulled in enough participants for subsequent processes to succeed, e.g., for a survey to be statistically reliable?

These questions can be answered by demographic surveys of participants, compared to polls of the underlying population of stakeholders.
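As a minimal sketch of that comparison, a goodness-of-fit test can flag when the participant sample has drifted away from the stakeholder population. The stakeholder categories, counts, and population shares below are invented for illustration, and the population shares are assumed to come from polling.

```python
# Hypothetical sketch: does the engagement sample match the stakeholder
# population on a key characteristic (here, how people use the bridge)?
# The categories, counts, and population shares are invented for illustration.
from scipy.stats import chisquare

categories = ["commuters", "cyclists", "pedestrians", "neighbors"]

# Participants actually pulled into the engagement process so far.
sample_counts = [120, 15, 10, 55]                # n = 200

# Share of each group in the wider stakeholder population, e.g. from polls.
population_shares = [0.55, 0.15, 0.10, 0.20]

n = sum(sample_counts)
expected = [share * n for share in population_shares]

for name, obs, exp in zip(categories, sample_counts, expected):
    print(f"{name}: {obs} in sample vs. {exp:.0f} expected")

stat, p_value = chisquare(f_obs=sample_counts, f_exp=expected)
print(f"chi-square = {stat:.1f}, p = {p_value:.4f}")
# A small p-value says the sample is out of sync with the population; in this
# invented example cyclists and pedestrians are under-represented and the
# bridge's neighbors over-represented, so outreach needs rebalancing.
```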

SOCIAL SURVEY: Can we use our survey of commuters, our interviews with cyclists, our polls of taxpayers to design relevant education efforts and anticipate the key issues in the negotiation phase? Are we assessing all stakeholder groups in a reliable way? Are we following the relevant best practices from statistics, ethnography, and so forth?

EDUCATION, INFORMATION, PUBLIC RELATIONS: Do our presentations to automobile association members in fact bring commuters up to speed about the different needs of cyclists? Do users engage with our website in enough depth to understand the uncertainties in the bridge construction project? Do the bridge’s neighbors have a clear sense of how construction will affect them?

NEGOTIATION: Is the negotiation phase structured to address the concerns we’ve uncovered in the social survey phase? Once the negotiation phase has concluded, are taxpayers ready to support the bridge? Are cyclists comfortable that they’ll be able to use the bridge safely? If we’ve added a park project to compensate the bridge’s neighbors for the construction impact, does the neighborhood understand and accept the relationship between the park and the main bridge project?

OPENING UP: Once we’ve opened up the process, are taxpayers who weren’t directly involved in the previous four phases as supportive of the bridge project as taxpayers who participated in the negotiation? Are the bicycle activists we didn’t reach with the initial public education campaign comfortable that they too will be able to use the bridge safely? Are the stakeholders who participated in the public engagement process directly and those who learned of the process through our dissemination similar in their degree of project understanding and support?

Metrics, Decision-Making, and an Orientation to Results

… they get us where we’re going.

This post demonstrates the value of the view of public engagement laid out in the previous two posts. By fitting public engagement into the larger picture of public infrastructure projects, we have a context for considering the kinds of decisions that will need to be made, and thus what kinds of questions and metrics will be useful.


“Who?” first: planning successful social media strategies

Posted: May 8th, 2012 | Filed under: Design, Metrics, Social networking

Not “Mac or Windows?”; rather “who’s going to help?”

I used to dread when friends or family asked: “Chris, what sort of computer should I buy – laptop or desktop, Macintosh or Windows?” The people who needed me most were the hardest to help – they didn’t know how they’d use a computer, weren’t settled on much of a budget, had very high expectations, but little sense of what it would take for them to put a computer to good use.

But turning the question back on them (“how will you use the computer? do you need to hook it up to a printer or a PDA? how much do you want to spend?”) led to embarrassment, not insight.

Finally, I realized that people who asked me a question like this would rely on a friend or family member for troubleshooting when things went wrong. Surprisingly, most of my questioners knew right away, when asked, who that techie would be: if there were problems, they’d call on Cousin Amy, or Joe from church, or that nice man down at the High Tech Depot.

Bingo.

The next time I got the question, I responded with one that was useful, not annoying: “who’s going to help you with your new computer, and what systems do they know?” A light went on in my friend’s eyes: they knew what to do.

The moral: Those first questions about technology are almost always, really, questions about people.

Thus, for organizations, not “Twitter or Facebook?”; rather “who’s going to make this work?”

Now, if you’re an organization seeking social media strategy and technology advice, you’ll need a little more help.

Josh Bernoff and Forrester to the rescue, with a shiny (and useful) acronym: POST.

Consider, in order:

  1. People: “[K]now the capabilities of your audience.”
  2. Objective: “Decide on your objective…. Then figure out how you will measure it.”
  3. Strategy: What processes “will be different after you’re done?”
  4. Technology: Twitter? Wiki? Facebook? Blog? etc. “Once you know your people, objectives, and strategy, then you can decide with confidence.”

For non-profits and government agencies, I’d widen the circle of People: your staff, your management, your donors, and your partners are important, too. Who will need to participate in this new strategy to make it a success? What’s their training? What are they capable and motivated to learn?

And I’d start the search for Objectives and, more importantly, measures by writing the stories you’d like to tell when the initiative is a success.

Say you’re putting a state legislature online in a more friendly and accessible way. Wouldn’t it be great to be able to say, once you were done, that a particular group of constituents that had been out of the loop for years used your new site to track a proposed law that threatened to hurt them and, instead, shaped the legislation to help them?

That gives you a rich picture of what success looks like — reaching new, non-expert audiences, providing early warning, making legislative content and procedures comprehensible — and how you might measure it.
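To make that concrete, here is one way the story-derived measures might be written down next to the analytics data needed to compute them. Everything here (the field names, the notion of a self-identified insider, the plain-language summary widget) is a hypothetical sketch, not a description of any real site.

```python
# Hypothetical sketch: turning the success story into candidate measures for a
# legislature-tracking site. All field names and data structures are invented.

def story_derived_measures(sessions, alerts, bill_pages):
    """Each argument is a list of dicts drawn from (assumed) site analytics."""
    # "Reaching new, non-expert audiences"
    new_non_expert = [s for s in sessions
                      if s["first_visit"] and not s["self_identified_insider"]]
    new_audience_share = len(new_non_expert) / max(len(sessions), 1)

    # "Providing early warning": alerts set up before a bill's first committee vote.
    early_alerts = [a for a in alerts if a["created_days_before_first_vote"] > 0]

    # "Making content comprehensible": visitors who open the plain-language
    # summary instead of bouncing off the raw bill text.
    summaries_opened = sum(1 for p in bill_pages if p["summary_opened"])
    comprehension_proxy = summaries_opened / max(len(bill_pages), 1)

    return {
        "share_of_sessions_from_new_non_expert_visitors": new_audience_share,
        "alerts_created_before_first_committee_vote": len(early_alerts),
        "share_of_bill_views_using_plain_language_summary": comprehension_proxy,
    }
```

None of these is a perfect measure, but each traces back to the success story rather than to whatever the analytics package happens to count.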

So, those first technology questions are, almost always, really questions about People, Objectives, and Strategy. Technology, in POST and in life, is the last question, not the first one.


Brownie points, or results?

Posted: May 20th, 2010 | Filed under: Metrics, Open Government

Using the Gulf oil spill to get clear about measuring Open Government

Measure “Open Government”? Yes, but …

I think that the success of the Obama Administration’s Open Government effort is critical, but I’m put off, even bored, by the measurement discussions to date. Imagine that you’ve got good reason to believe that your nephew is the next Jackson Pollock, your niece the next Maya Lin, and then the first report back from their studios is: “he’s painted more than 500 square feet of canvas! she’s created two and a half tons of sculpture!” and you’ll know how I feel.

It’s as if someone brought a speedometer to a sunset.

In December, Beth Noveck, the leader of the Administration’s Open Government efforts, wrote that measures of Open Government would be useful as a way to hold the Administration’s “feet to the fire” and to ensure that proposed changes were implemented. She suggested a variety of measures, including:

  • The year-to-year percentage change in information published online in open, machine-readable formats;
  • The number of FOIA requests processed and the percentage change in backlog;
  • The creation of “data platforms” for sharing data across government;
  • The successful posting of data that increases accountability and responsiveness, public knowledge of agency operations, mission effectiveness, or economic opportunity.

(I’ve left the link in, but alas the page has disappeared.)

To be fair, it’s a tough problem. As the “Measuring Success” group from one of the Open Government Directive workshops noted, seemingly reasonable measures may be difficult to interpret, for instance: the time spent on a website might signal popular use … or that users were confused.

So let’s start again, this time from the bottom up: if you were managing an Open Government effort, what would you want to measure? For instance…

Virtual USA

In February 2009, Homeland Security rolled out Virtual USA (vUSA) for the sharing of geospatial data between emergency response agencies, “a national system of systems … so that disparate systems can communicate with each other”. It will allow responders in multiple locations to coordinate their efforts with a common set of images, and thereby reduce confusion and shift at least some activity away from phone calls. It is a bottom-up collaboration between DHS, first responder groups, and eight southeastern states. The system depends in part on states and localities to provide data, and is locally controlled: the agency providing the data owns it, controls how and when and with whom it is shared, and can use its existing software to do so.

vUSA seems impressive:

Two more pilots are starting, covering eleven more states. And the user community at FirstResponder.gov has about 150 members.

The nearest level of management

Continuing with our exercise, imagine that you’re in charge of vUSA. You face decisions about which additional GIS standards and technologies to incorporate, how to divide resources between technology vs. additional outreach or training for participating states, and whether to reach out to additional Federal agencies, for instance, the Minerals Management Service, which had primary regulatory authority over the BP oil well.

To guide these decisions, you’d ask your staff these quantitative questions (a rough sketch of the first appears after the lists below):
  • How much staff time in participating states has shifted from coordination via telephone to coordination via vUSA?
  • For what issues and data needs are participating states still using the phone?

and these qualitative ones:

  • What would have changed in the oil spill response if vUSA didn’t exist?
  • How does adoption and involvement differ between various agencies in the participating states and the various components of each agency?
  • Are response sites still using fax, couriers, or other workarounds to share information?
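A rough sketch of the first quantitative question, assuming each participating state keeps (or could keep) a simple log of coordination tasks with the channel used and the minutes spent; the states, channels, and numbers below are invented.

```python
# Hypothetical sketch: how much coordination time has shifted from phone to
# vUSA in each participating state? The log format (one row per coordination
# task, with a state, a channel, and minutes spent) and all numbers are invented.
import pandas as pd

coordination_log = pd.DataFrame({
    "state":   ["AL", "AL", "FL", "FL", "GA", "GA"],
    "channel": ["phone", "vusa", "phone", "vusa", "phone", "vusa"],
    "minutes": [340, 510, 620, 280, 150, 760],
})

minutes = coordination_log.pivot_table(index="state", columns="channel",
                                       values="minutes", aggfunc="sum")
minutes["vusa_share"] = minutes["vusa"] / (minutes["vusa"] + minutes["phone"])
print(minutes.sort_values("vusa_share"))
# States near the bottom of the table are still leaning on the phone, which is
# exactly where the qualitative follow-up questions above come in.
```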

Big picture managers

Now zoom out a bit: imagine that you’re a senior manager at the Department of Homeland Security (DHS), with ultimate responsibility for vUSA but also many other programs.

Given your agency’s recent history with Katrina on the Gulf Coast, among other things, you’ll monitor how smoothly local, state, regional, and federal actors work together in dealing with emergencies and wonder whether staff increases (e.g. for liaison officers), training, or incentives would be more likely than technology (such as vUSA) to improve coordination. And you’d consider whether coordination should be addressed more broadly than geospatial information sharing, for instance to include the development of shared goals among the coordinating agencies or agreement on division of roles and responsibilities.

You’d ask the questions we’ve already considered, but you’ve got a broader range of responsibilities. The vUSA manager’s career will live or die by the success of that effort, but you’re worried about DHS’s success in general. Maybe there are better ideas and more worthwhile efforts than vUSA.

To assess this, you’d ask your staff to research these issues:

  • How eager are other states to join the vUSA effort? (The two additional pilots would be a good sign.)
  • How has vUSA affected the formulation of shared goals for the oil spill clean-up effort?
  • Is each agency involved playing the role that it is best suited for in the clean-up?
  • How has emergency response to the flooding in Tennessee, a participant in vUSA, differed from the response to flooding earlier this year in Minnesota and North Dakota, states that don’t participate in vUSA?

The last question is an example of a “natural experiment”, a situation arising out of current events that allows you to compare crisis management and response assisted by vUSA vs. crisis management and response handled without vUSA, almost as well as you could with a controlled experiment.

You’d also have some quantitative questions for your staff, for instance: how have the FEMA regions participating in vUSA performed on FEMA’s overall FY 2009 Baseline Metrics from the agency’s Strategic Plan?
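A sketch of how staff might lay out that natural-experiment comparison follows. The response measures and every number below are invented purely to show the shape of the exercise; they are not FEMA data.

```python
# Hypothetical sketch of the natural-experiment comparison: flood response with
# vUSA (Tennessee) vs. without it (Minnesota, North Dakota). The measures and
# numbers are invented to show the shape of the comparison, not real data.
flood_responses = {
    "Tennessee (vUSA)":       {"hours_to_common_picture": 6,  "coordination_hiccups": 1},
    "Minnesota (no vUSA)":    {"hours_to_common_picture": 30, "coordination_hiccups": 7},
    "North Dakota (no vUSA)": {"hours_to_common_picture": 26, "coordination_hiccups": 12},
}

for event, m in sorted(flood_responses.items(),
                       key=lambda item: item[1]["coordination_hiccups"]):
    print(f'{event}: common operating picture in {m["hours_to_common_picture"]} hours, '
          f'{m["coordination_hiccups"]} coordination hiccups')
```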

And back to “measuring open government”

Note how much more compelling these “close to the ground” measures are than the generic “Open Government” metrics. If you were told, this morning, that a seemingly minor vUSA glitch had forced the oil spill command center to put in extra phone lines, no one would have to interpret that measure for you: you’d already know what you’re going to focus on today. And if, as a senior manager, you had a report in front of you demonstrating that none of the dozen hiccups in the response to North Dakota’s flooding were repeated in the vUSA-assisted response to the Tennessee disaster, you might actually look forward to a Congressional hearing.

Two of the Open Government measures are relevant:

  1. vUSA is a new platform for sharing data across government.
  2. It’s certainly intended to increase DHS’s responsiveness and its effectiveness in carrying out its mission, though it appears that only some vUSA data are publicly available.

But these considerations would hardly be useful to the line manager, and they’d be useful to the agency’s senior managers mostly as checkboxes or brownie points when big Kahunas from OMB or the White House came to call.

Conclusions

Of course, if we had picked other Open Government efforts, we would have identified different measures, but there are some general lessons for the problem of developing Open Government metrics.

Get your hands dirty

Reviewing an actual program, rather than “Open Government” in the abstract, makes it easier to get a handle on what we might measure.

Decision requirements drive measurement needs

The line manager, about to decide whether to reach out first to EPA or MMS in expanding vUSA’s Federal footprint, will be eager to know how much back channels have been used to bring these two agencies into the oil spill cleanup. The GIS guru will want to know whether there’s something about mapping ocean currents that can’t be handled by vUSA’s existing standards.

Different decision-makers require different metrics

In contrast, the DHS senior manager had better not get lost in the weeds of GIS interoperability, but ought to be ever alert for signs that the whole vUSA effort misses the point.

In other words, when someone asks “what’s the right way to measure the success of this open government effort?”, the appropriate answer is “who wants to know?”.

Seek out natural experiments

Even with great measures, Open Government champions will always be confronted by the challenge of demonstrating that their project truly caused the successful result. A “natural experiment”, if you can find one, will go a long way towards addressing that challenge.


Grow your own metrics for online engagement

Posted: May 3rd, 2010 | Filed under: Civic engagement, Metrics

Start with “how do I know it’s working?”, not “what can I count?”

E-democracy.org has 16 years of experience in creating and hosting online civic forums via email and the web.

I participated in an email thread there recently that began with this question:
“How would you measure engagement on public issues via interaction in online spaces?”

It led to a lively exchange, but it left me unsatisfied. “Measure” and “metrics” create a kind of tunnel vision, focusing attention on what’s easy to count on the web (hits, number of posts per day, number of posters, pageviews, unique visitors), and away from our understanding and experience of online forums.

As it happens, Steve Clift, the founder of e-democracy, recently reported the results of a grant to create four online forums centered on a number of towns in rural Minnesota.

The report discussed a number of ways that forum discussions had affected their communities:

  • A discussion about new regulations regarding the handling of household wastewater led the county’s director of planning to reconsider regulatory language.
  • Discussions in a second forum generated stories in the local newspaper.
  • Participants used a third forum to get advice on how to fight the city’s withdrawal of their permit to raise chickens in their backyard.
  • Participants in another e-democracy forum, not covered in this report, used it to organize their response to a mugging near a transit stop, including meetings with city officials. The transit agency’s community outreach staffer joined the forum, and then the discussion, in response to their organizing. The president of the local neighbors group participated as well.
  • The report also noted that local government websites had linked to some of the forums and, in one case, the local government had sponsored the start-up of the forum.

These stories suggest a variety of measures that could be applied to e-democracy forums:

  1. How many local government officials are forum members? What percentage of all local government officials are members?
  2. How many of these post, and how often do they post? How many of these posts reflect concrete changes in behavior (meetings scheduled, agenda items added or changed for official meetings, changes to legislation or regulation)?
  3. How many discussions have been used to organize meetings in the community or with government officials?
  4. How many discussions have received links from local or regional newspaper websites?

These measures all need development, and we could likely find booby traps in each. But consider the conclusions of two (hypothetical) reports on the community impact of a (hypothetical) forum in Smallville:

Based on web metrics
The forum received 500 pageviews from 200 unique visitors per month.
It had a membership of 128 at the end of the year.
The average length of a visit was 3.5 minutes.
The average visit included 4 pageviews.

Based on “grow your own” measures drawn from these stories
Two of the five members of the city council became members of the Issues Forum. One joined after a discussion of the city’s response to last winter’s monstrous snowfalls erupted in the forum and led to a delegation of forum members testifying before the council about ineffective snow plowing.

One councilperson posts at least once a week. Three times over the course of the past year, she has responded to questions in the forum or asked for further information. She also introduced an amendment to a local zoning ordinance based on concerns raised in the forum.

A dozen forum members from Smallville South used the forum to organize a meeting with the transportation department to discuss the pothole problem on local streets. They are using the forum to follow up on the meeting, and an official from the transportation department posts updates on actions taken relating to potholes at least once a month.

The Smallville Gazette has used the forum to solicit feedback on its coverage of the city council.

On average, one out of four of its weekly online issues includes a link to one or more Forum discussions.

Which would be more likely to persuade you that Smallville’s online forum had actively engaged the community?