Governance data initiatives are proliferating. And we’re making progress: As a community, we’ve moved from a focus on generating data to caring more about how that data is used. But are these efforts having the impact that we want? Are they influencing how governments make decisions?
Those of us who work with governance data (that is, data on public services or, say, legislative or fiscal issues) recognize its potential to increase government accountability. Yet as a community, we don’t know enough about what impact we’ve had. The one thing we do know is that the impact so far is more limited than we’d like—given our own expectations and the investments that donors have made.
In partnership with the Open Society Foundations’ (OSF) Information Program, we set out to investigate these questions, which we see as increasingly pressing as we expand our own work in this area. Today, we are excited to share the results of a new scoping study that presents further research insights, as well as implications and recommendations for donors.
The issue of data impact emerged through our work developing the first sub-national open data portal in Sub-Saharan Africa, creating a health clinic feedback system with policymakers in rural Nigeria, and studying the national open data portfolio in Mexico. Each of these projects helped to illuminate the challenges of making data use effective.
Based on these lessons, we hypothesized that imprecise understandings of users make the design and implementation of governance data and data products less impactful than they could be.
We explored this hypothesis through a tightly scoped study of communities focused on government procurement and corporate influence in politics. What we found validated our hypothesis, but also went beyond it, pointing to the need to not only take full account of political realities, but also apply that knowledge in the design, development, and dissemination of information.
Different governance data initiatives understand their users to varying degrees. Our research, however, highlighted that we continue to lack clarity on who users are, why they use governance data (or not), and how they are using this data.
One illustration is the common use of the labels “data producer” and “data consumer.” These terms, borrowed from the commercial technology sector, are only rough approximations of the ways governance actors actually interact with data. Evidence from our research suggests that the division between “producer” and “consumer” is a false binary, as study respondents largely rejected these labels when describing their work. One government watchdog group, for example, began as a group of journalists gathering data through freedom of information requests. Over a decade, the group grew into a leading national producer of data analysis. As one staff member explained, “Our open data projects seek to not only create our own internal cases for fighting corruption, but to also generally provide data to others [to achieve the same goals].”
Another way this lack of understanding has manifested is through the tendency of governance data communities to refer to “users” in broad categories, such as “government,” “private sector,” “civil society,” and “media.” Our research emphasized that a more granular understanding of the heterogeneous users within these categories allows for more effective engagement. The more precise we can be about who users of governance data are, the more likely we are to move away from asking, “How do we reach our users?” and toward asking, “Of all the possible actors, who has the most influence over decisions on this issue? How are they exercising that influence? How can we build on their existing behaviors and motivations to encourage using governance data in their work?”
Both of these symptoms point to a gap in the larger governance data discourse, which treats “users” as a group to be “designed for.” This framing obscures the range of actors who might be recruited, trained, lobbied, serviced, supported, or otherwise engaged to influence governance outcomes.
Politics and the dynamic nature of governance processes are not always adequately accounted for—this is the second challenge limiting the impact of governance data initiatives. Our research showed that many initiatives do consider these forces in their strategy and project design. In practice, however, even actors who recognize the importance of political implications tend to prioritize the technical dimensions of governance data (such as creating formats that are most user-friendly or developing standards for greater product interoperability).
One explanation for this is that within the loosely defined “governance data community,” people who work in government are underrepresented. Additionally, stereotypes of slow and impenetrable bureaucracies clashing with agile, technology-centered ways of working result in biases against working with government. In short, the community tends to have less substantive engagement with government itself, and a limited understanding of the interests and capabilities of the government actors it seeks to influence.
The governance data community is growing and the future looks promising; new communities of practice are emerging, which benefit from peer groups and past lessons. While our research identified certain gaps in conceptualizing and executing governance data work, we believe the governance data community is ripe for testing new approaches to addressing them. Data and data products can be built on a better understanding of a wide range of actors who can use data to influence the way governments make decisions (along with an understanding of their relative influence). These products can and should also be designed based on governance processes, and how these actors actually work to influence government.
It is also an opportune time to apply politically informed and user-centered methods. The bulk of investments in governance data to date have focused on building infrastructure (such as setting up the operational structures of multi-stakeholder initiatives) and creating and defining technical guidelines (including data norms and standards). But the community is recognizing that with pre-defined technical aspects and difficult-to-dismantle secretariats, we are at serious risk of ossifying ineffective practices into widely adopted norms.
A number of governance data initiatives are thoughtful in considering their next steps. Groups including the Open Contracting Partnership, Governance Data Alliance, and Follow the Money are meticulously planning and designing how to test and learn about data use. We hope that the insights we have shared here (and in our scoping study) help us to work together and employ smart practices. These may be time-consuming because they require deep research to be effective, and challenging to implement because they go beyond “low-hanging fruit” to address complex political issues. But in the end, they will get us closer to the changes in government decision-making that we set out to see.
Reboot is grateful to the Open Society Foundations’ Information Program for their support and thought partnership throughout this work, and to the Omidyar Network for early inputs. We would also like to thank our interview respondents—both independent practitioners and representatives from the organizations listed—who volunteered their time to share their valuable insights with us: American Assembly, Development Initiatives, Fair Play Alliance, the Government of Mexico’s Office of the President National Digital Strategy, International Budget Partnership, LittleSis, Open Contracting Partnership, Open Corporates, Open North, Poderopedia, Practical Participation, Results for Development, and the World Bank Group’s open government team.