Is open government working?
I asked the question in a previous post. Folks much better informed—Jerry Brito, Tiago Peixoto, and Nathaniel Heller, to name a few—have been asking the question for some time. The answers are not forthcoming.
Too often, assessing the impact of open government initiatives amounts to measuring outputs: how many developers flocked to a civic tech hackathon; the number of procurement records feeding corruption hawks and socially-minded graphic designers; or the number of tweets or media mentions about a particular initiative, regardless of whether they come from the same industry blogs and actors that already cover open government.
Quantitative metrics have their place. They may be useful for gauging the popularity of an initiative. They are almost always used to justify funding. But, ultimately, these numbers say very little about open government’s actual impact on people.
For every FOIA request fulfilled, are we improving livelihoods? For every civic app downloaded, are we changing how public institutions function? For every new country that joins the Open Government Partnership, are we making progress in the realization of government accountability? The answer is: we simply don’t know.
We need to rethink how we evaluate these initiatives. The promise of open government to deliver more just and accountable public institutions demands active participation from citizens. But cultivating greater civic engagement is not simply a question of who and how many showed up. We must understand why they came in the first place, what happened when they got there, and whether they would come back—motivations and outcomes matter.
We need to move beyond measuring outputs and toward understanding experiences.
Applied ethnography holds great potential for understanding how individuals experience open government initiatives. Ethnography—“a portrait of people”—is the study of people within their social and cultural contexts.[1] It embraces context, examining how results can be explained by human factors and situational interactions. Ethnography allows us to understand the meaning of participation for different individuals—who is affected or not, and why.
Take, for example, this ethnographic study of a participatory budgeting initiative in Rome. The study found that through engagement with the participatory budgeting process, some participants “discovered a passion for politics,” leading them to join neighborhood associations and local political parties. Other participants, however, left the budgeting process feeling more cynical about and disengaged from participatory democracy.
Why such different outcomes?
Probing the broader context of the participatory budgeting initiative, the study discovered that the discussions at budget meetings were not always as important as the conversations that occurred outside the formal gatherings—in the hallways, at the bar, or on the street. Those who embraced this informal social dynamic became more engaged; those who failed to follow its unspoken rules were sanctioned and became disengaged.
“Instead of including as many residents as possible,” the study writes, “[participatory budgeting] very often excluded those who could not speak immediately the language of the institution.”
Another ethnographic study, of a land rights initiative in India, also surfaced the negative, unintended consequences of open government. The project developed an open data portal for land titles, assuming that information transparency would increase citizens’ bargaining power vis-à-vis the state. In fact, the opposite happened. Public officials and corporate actors used the portal to shape urban policies and development in their own interests. Instead of empowering poor citizens, the initiative further marginalized them.
Exploring individuals’ experiences of open government endeavors, therefore, yields a rich understanding of their real-world impact. We know who became engaged and why. We understand motivations and outcomes. And most importantly, in the two cases cited, we gain valuable insight into the specific pain points that ultimately turned these ostensibly open initiatives into fairly private affairs. Were we to implement such programs again, we would know which challenges to address.
Ethnographic evaluations are important for all initiatives, but they are especially critical for the burgeoning field of open government. Social change is, at its core, about human systems. Attempts to change political behavior or spur communal action depend on the choices and actions of individuals. Those choices and actions are highly context-specific, influenced by communal norms and personal preferences. Counting outputs doesn’t provide the necessary level of nuance. Only an understanding of those individual experiences will.
Evaluation rooted in understanding experiences takes us beyond the binary of “program design” and “impact evaluation”. These terms become one and the same, intertwined and supporting each other. The design world calls this the prototype-testing-iteration loop. The public policy world has been talking about problem-driven iterative adaptation.
At Reboot, we focus heavily on understanding how individual experiences will shape a program’s success. Our hunch is that, like a well-designed user experience, a successful program must be built to fit—tailored to the needs of its different stakeholders.
For example, a recent open government program we worked on in Nigeria paired community associations with a local government office; to be effective, it required both sides to have reasons to engage. Citizens were displeased with some of the public services the government office provided. In collaboration with communities, we developed a text message-based mobile tool that allowed citizens to report their frustrations.
That much was straightforward. But what incentive did the local government office have to respond?
According to an initial assessment: not much. The office was already overworked and understaffed. Probing deeper into the context and constraints of the office through applied ethnography, however, revealed more. We learned that officials were deeply frustrated. Although they were hardworking and committed, program beneficiaries often complained of perceived government misconduct. Citizens believed that they were not receiving the benefits owed to them, but as we found out, beneficiaries had an inaccurate understanding of what the program entitlements actually were.
Taking these frustrations into account, we built in a much-needed opportunity, from the officials’ perspective, to set the record straight. Instead of just asking the underappreciated officials to respond to citizen feedback in the name of “open government”, we also gave them a mechanism to explain program benefits to the citizens providing feedback—and, in the process, to clear their own names. By tailoring the program to the specific needs of both citizens and government officials, we were able to address their respective barriers to participation and enable constructive engagement on both sides.
Naysayers might argue that seeking to understand the human experiences within open government initiatives is too difficult. Implementing organizations might claim that they can create open government platforms or campaigns, but they have no control over whether these initiatives will change how people or institutions behave. Donors might say that measuring outputs is much less expensive than the in-depth research needed to assess motivations and outcomes. And everyone will maintain that attribution is tough, as is developing a credible counterfactual—how to gauge what would have happened without this initiative?
But “too difficult” and “too expensive” are lousy excuses. Each open government initiative should be treated as a unique opportunity to learn—to develop a more substantive understanding of what is working and what is not through an exploration of individual experiences, among citizens and within institutions.
We can begin by recognizing that organizations should not evaluate their own work. Success bias is well documented (and no surprise), particularly when future funding is at stake. Anonymizing the names of programs and participants, and protecting the identities of projects—as we already do for respondents—is another simple, smart practice. It strips away ego and gives practitioners more freedom to be honest and self-critical about their work.
We’re excited to see ongoing discussions on this topic, and some of these ideas becoming institutionalized. We’re heartened that the medical profession—whose scientific methods have been transposed to public policy in the form of randomized controlled trials—is embracing applied ethnography to take a more experience-centered approach to research. Whether under the banner of “impact evaluation” or otherwise, understanding the experiences of patients through explorations of their cultures, beliefs, and needs can enable the design of more effective treatment programs.
Our hope is that through more nuanced evaluations rooted in understanding individual experiences, we can untangle the putative causes of change and build a more robust knowledge of how citizens participate in civic life and how governments meet their demands.
* * *
I’d be keen to connect with other practitioners who are taking ethnographic (or similar) approaches to impact evaluation, whether in the open government sphere or elsewhere. Please email me at panthea AT theReboot DOT org.
[1] Ethnographic research is often mistakenly equated with “interview studies” or other types of qualitative research. An immersive research approach, it uses techniques such as participant observation, unstructured interviews, and artifact collection to attempt a holistic analysis of human behaviors, interactions, and perceptions over time.