We are actively engaged in the dialogue and debates of our space: on issues of social justice, global development, and democratic innovation, and on the ethics and methodological evolution of design, mediation, and co-creation practice. More of our writing can be found on Medium.
Editor’s Note: This article first appeared on SSIR.org
In the social sector, many are hailing “user-centered design” as a revolutionary advance. It is the first of the Principles for Digital Development (defined in consultation with nearly every major global development institution) and high-profile leaders like Melinda Gates are lauding the methodology. Amid all this buzz, commercial design firms are increasingly winning international development contracts …
… and development practitioners are increasingly disappointed with the results.
There is a backlash on the way, and for good reason. I have too often seen people use design principles about discarding assumptions as an excuse for ignorance of historical context. I have seen designers championing “creativity” as if it compensates for their lack of experience in developing countries. User-centered design was born out of the private sector, and many in my field are starting to wonder if the methodology just isn’t right for the complex global challenges staring us down…
A lot of people hate writing. But most of us like having written (as Dorothy Parker said), especially in the social sector. Strong writing can advance a career and win grants and contracts. At its best, writing can shape change. The number of toolkits and manifestos that pour out of the development and design fields shows our faith in the power of words.
But writing itself is a pain. It always takes more time (and edits) than expected. And, if you scoffed at the words “manifesto” and “toolkit,” you know how much effort goes into documents that fail to get results.
In recent weeks, I’ve led a couple of staff-wide discussions at Reboot about the writing process—what makes it hard, and what can make it better. We kept returning to the core principles of Reboot’s work. As it turns out, good writing is like good design: Both call for empathy.
Writing is difficult for the same reason that writing has power: “Words make worlds” (Andrea Cornwall). An idea or plan, once written down, becomes a commitment.
This is scary, and it makes writing hard. It’s even harder when we’re writing collaboratively, with not only multiple fears and perspectives, but also the added pressure of speaking on behalf of our entire organization.
In response to these fears, many writers end up taking one (or both) of two common shortcuts: writing fluff and writing in the weeds. Both are failures of empathy; both weaken the final product.
Fluff is the more pernicious. Imagine a politician’s campaign website. It’s calculated to communicate clear values, but vague ideas. Most fluff in the social sector is not so cynical, but once you start looking, you’ll see a similar defensiveness everywhere. For example, when passive voice is used to avoid blame. It can also be the case, similarly, when specific individuals write with more formality than would theoretically be totally optimal, which creates the impression of intelligence but, upon closer reading, reveals itself to be redundant, repetitive, and saying the same thing over and over again.
In other words, it’s like an angora rabbit:
It’s big and impressive on first glance, but once you get through all the fluff there’s just not much actual rabbit.
Fluff is a failure of empathy because it expects the reader to do the hard work of discovering the meaning buried inside extra words. But few readers actually will; instead, fluff becomes an excuse to skim. Like the overuse of buzzwords, it offers the appearance of consensus. It’s a box-checking exercise (“report submitted”) with no real accountability.
The second shortcut, writing in the weeds, comes from admirable expertise and deep thinking. But it falls short of “good writing” because it gets stuck in context and detail without offering a larger idea. It prioritizes nuance at the expense of meaning.
One of the most common examples in the social sector is our habit of filling our sentences with lists of three:
We create products that are tailored, flexible, and adaptive.
We strive to understand people’s habits, constraints, and desires.
We developed a sustainable plan for how the business would work, grow, and thrive.
Humans love groupings of three; the pattern appeals to our pattern recognition and sense of rhythm. And everyone in the social sector, including the most prominent organizations and leaders, uses lists of three, even in otherwise strong writing. But the device is overused, and it often shrouds the main idea in unnecessary nuance. Take my last hypothetical example: “Growing” and “thriving” may not be exactly the same thing (and “sustainable” is a buzzword with its own ambiguous nuance), but if we’re thinking critically, the sentence can be cut down to just five words: “We developed a business plan.”
Looking for lists of three is a great signpost to start editing more critically. You’ll be surprised how often the three things are actually just one.
Good writing requires standing back and seeing the big picture; the examples and details that back up your argument come later. Every reader approaches your work cold. The trick is to invite them in and give them a comfortable place to settle. Which brings us back to empathy.
To practice empathy, actively imagine another person’s internal experience and motivations. This is not “defining your audience,” which we learn in grade school. That exercise is usually limited to deciding whether you’ll write formally or informally. Empathy transcends your reader’s formality and expertise; it asks you to care about your reader’s time.
Your reader is a real person with goals and a full inbox. Maybe she’s Gisla, who is tired of flipping to the appendix for what MIC means. Maybe he’s Nandor, who just spilled coffee on his sleeve and is short on patience. Or maybe she’s Leila, who feels that her career success this month depends on summarizing your 40-page report for her boss.
Empathetic writing invites the reader into your work; offers a summary of what information or argument you will deliver; and commits to explaining why it’s important.
Some defend writing-in-the-weeds when writing for experts. It’s true that you can (and often have to) pack more context and technical detail in to give a more advanced audience something they don’t already know. But even experts spill coffee on their sleeves. Empathy reminds you to respect the reader’s right to close the tab.
Empathy has a role in preventing fluff writing, too: Imagining Gisla’s eyes glazing over can help you edit phrases like “it is true that persistent inequalities exist that are less than optimal.” But to make the biggest dent in fluff, to write with clarity and conviction, we have to stop worrying so much about what the reader might think.
But that doesn’t mean letting go of empathy entirely. Strong writing maintains a powerful sense of empathy for not just readers, but the people we’re writing about. This is a special consideration for the social sector, where we’re often writing for an audience with a lot of power, about people with very little. Our report on an HIV harm reduction program will be read by a program staff member at a foundation; the people in the report are living with HIV.
This disparity between reader and subject makes avoiding fluff one of the writer’s most urgent obligations: Fluff obscures the human stories behind our work. That can only weaken our arguments; instead of driving change, our work will prop up the status quo.
Those manifestos and toolkits pouring out of the development space are not wasted efforts. Writing with weeds and fluff is often part of the first draft on the way to stronger work; the writing process can help us find those weak points and hard decisions. And in a field where weaker writing is too common, those who can communicate with clarity and empathy have even better chances of being heard.
Writing can change the world. But we have to put in the effort.
Kate Reed Petty is a writer and editor who has worked as a strategic advisor to Reboot since 2011.
I’ve got buzzwords on the brain lately. As I describe the projects I have worked on over the past year (an evaluation of an open government innovation fellowship, facilitating co-creation workshops for the Civil Society Innovation Initiative, a case study of a program to increase citizen engagement and government responsiveness), I keep hearing the same phrases over and over: “government innovation,” “participation,” “co-creation”… Over the course of these projects, I’ve heard, written, and said more buzzwords (and fuzzwords) more times than I’d like to admit.
It’s not that government innovation, participation, and co-creation are bad ideas—of course they’re not! It’s that these terms have become part of the imprecise governance-speak running rampant through the open government space. These vague concepts may at one point have connoted something specific, but they become so overused that they mean just about anything to anyone. Buzzwords include terms like “innovation” and “co-creation,” while seemingly everyday words like “engagement” and “government” can be used imprecisely as fuzzwords, to no one’s benefit and to some people’s potential harm—a dynamic Andrea Cornwall dissects in her introductory chapter to the fascinating book Deconstructing Development Discourse: Buzzwords and Fuzzwords.
I’m not suggesting we eradicate this jargon completely; like other potentially dangerous elements, it can be very useful in small quantities. Innovators must create coalitions of people from different government ministries and across sectors, and these phrases can be shortcuts to finding common ground. As we build bridges across diverse, multi-sectoral groups, it can be useful to have language that excites a broad range of people to embrace new concepts, break old boundaries, and define new possibilities.
But while buzzwords can be useful, they are not without risk. They mask ambiguity in ways that end up creating confusion or conflict when it’s time to convert those catchy phrases into program activities and budget priorities. Like all jargon, they also tend to be difficult to translate. This privileges those who are fluent in a buzzword-loving language like English, while creating a barrier to entry for—and ultimately disempowering—those who are less comfortable with the nuances and implied meanings of these unfamiliar phrases in an unfamiliar tongue. In the realm of open government programming, buzzwords can make a particular solution seem like an exciting “must-have” when it’s actually not the one best suited for the problem at hand. Or, they may be so broad as to allow people to claim a mantle and its attendant benefits without much justification.
Finally, keep in mind that everyone working in the government innovation space is likely suffering from buzzword fatigue too. Avoiding the use of buzzwords is itself innovative: it can be truly refreshing to listen to someone who refuses to use them.
When designing new open government programs, it’s important to use clear, simple language. Especially in the planning phase, everyone benefits when words like “innovation,” “co-creation,” and even “open government” are replaced with clear descriptions of what each actually means. Those buzzwords and fuzzwords may make another appearance when it’s time to be strategic about external messaging. For example, “open government” references an entire movement and nods to an associated global, multilateral partnership in a way that “transparent, accountable, and participatory government” may not.
In our work developing a new resource for open government implementers, we decided that we could all use some help in governance-speak diagnosis. The following table is meant to help us check those instances of (non-strategic) imprecision.
What’s your favorite buzzword or fuzzword? What do you wish we would all just say instead? Let us know in the comments, or @theReboot.
This post is adapted from Reboot’s forthcoming publication: Implementing Innovation: A User’s Manual for Open Government Programs.
Governance data initiatives are proliferating. And we’re making progress: As a community, we’ve moved from a focus on generating data to caring more about how that data is used. But are these efforts having the impact that we want? Are they influencing how governments make decisions?
Those of us who work with governance data (that is, data on public services or, say, legislative or fiscal issues) recognize its potential to increase government accountability. Yet as a community, we don’t know enough about what impact we’ve had. The one thing we do know is that the impact so far is more limited than we’d like—given our own expectations and the investments that donors have made.
In partnership with the Open Society Foundations’ (OSF) Information Program, we set out to investigate these questions, which we see as increasingly pressing as we expand our own work in this area. Today, we are excited to share the results of a new scoping study that presents further research insights, as well as implications and recommendations for donors.
The issue of data impact emerged through our work developing the first sub-national Sub-Saharan African open data portal, creating a health clinic feedback system with policymakers in rural Nigeria, and studying the national open data portfolio in Mexico. Each of these projects helped to illuminate the challenges of making data use effective.
Based on these lessons, we hypothesized that imprecise understandings of users make the design and implementation of governance data and data products less impactful than they could be.
We explored this hypothesis through a tightly scoped study of communities focused on government procurement and corporate influence in politics. What we found validated our hypothesis, but also went beyond it, pointing to the need to not only take full account of political realities, but also apply that knowledge in the design, development, and dissemination of information.
Different governance data initiatives understand their users to varying degrees. Our research, however, highlighted that we continue to lack clarity on who users are, why they use governance data (or not), and how they are using this data.
One illustration is the common use of the labels “data producer” and “data consumer.” These terms, borrowed from the commercial technology sector, are only rough approximations of the ways governance actors actually interact with data. Evidence from our research suggests that the division between “producer” and “consumer” is a false binary: study respondents largely rejected these labels when describing their work. One government watchdog group, for example, began as a group of journalists gathering data through freedom of information requests. Over a decade, the group grew into a leading national producer of data analysis. As one staff member explained, “Our open data projects seek to not only create our own internal cases for fighting corruption, but to also generally provide data to others [to achieve the same goals].”
Another way this lack of understanding has manifested is through the tendency of governance data communities to refer to “users” in broad categories, such as “government,” “private sector,” “civil society,” and “media.” Our research emphasized that a more granular understanding of the heterogeneous users in these categories allows for more effective engagement. The more precise we can be about who users of governance data are, the more likely we are to move away from asking, “How do we reach our users?” and toward asking, “Of all the possible actors, who has the most influence over decisions on this issue? How are they exercising that influence? How can we build on their existing behaviors and motivations to encourage using governance data in their work?”
Both of these symptoms point to a gap in the larger governance data discourse, which says that “users” are to be “designed for.” This obscures the range of actors who might be recruited, trained, lobbied, serviced, supported, or otherwise engaged to influence governance outcomes.
Politics and the dynamic nature of governance processes are not always adequately accounted for—this is the second challenge limiting the impact of governance data initiatives. Our research showed that many initiatives do consider these forces in their strategy and project design. In practice, however, actors acknowledge the importance of political implications but prioritize the technical dimensions of governance data (such as creating the most user-friendly formats or developing standards for greater product interoperability).
One explanation for this is that people who work in government are underrepresented within the loosely defined “governance data community.” Additionally, stereotypes of slow, impenetrable bureaucracies clashing with agile, technology-centered ways of working result in biases against working with government. In short, the community tends to have little substantive engagement with government itself, and a limited understanding of the interests and capabilities of the government actors it seeks to influence.
The governance data community is growing and the future looks promising; new communities of practice are emerging, which benefit from peer groups and past lessons. While our research identified certain gaps in conceptualizing and executing governance data work, we believe the governance data community is ripe for testing new approaches to addressing them. Data and data products can be built on a better understanding of a wide range of actors who can use data to influence the way governments make decisions (along with an understanding of their relative influence). These products can and should also be designed based on governance processes, and how these actors actually work to influence government.
It is also an opportune time to apply politically informed, user-centered methods. The bulk of investments in governance data to date have focused on building infrastructure (such as setting up the operational structures of multi-stakeholder initiatives) and on creating and defining technical guidelines (including data norms and standards). But the community is recognizing that with pre-defined technical aspects and difficult-to-dismantle secretariats, we are at serious risk of ossifying ineffective practices into widely adopted norms.
A number of governance data initiatives are thoughtful in considering their next steps. Groups including the Open Contracting Partnership, Governance Data Alliance, and Follow the Money are meticulously planning and designing how to test and learn about data use. We hope that the insights we have shared here (and in our scoping study) help us to work together and employ smart practices. These may be time-consuming because they require deep research to be effective, and challenging to implement because they go beyond “low-hanging fruit” to address complex political issues. But in the end, they will get us closer to the changes in government decision-making that we set out to see.
Reboot is grateful to the Open Society Foundations’ Information Program for their support and thought partnership throughout this work, and to the Omidyar Network for early inputs. We would also like to thank our interview respondents—both independent practitioners and representatives from the organizations listed—who volunteered their time to share their valuable insights with us: American Assembly, Development Initiatives, Fair Play Alliance, the Government of Mexico’s Office of the President National Digital Strategy, International Budget Partnership, LittleSis, Open Contracting Partnership, Open Corporates, Open North, Poderopedia, Practical Participation, Results for Development, and the World Bank Group’s open government team.
Last fall, I spent a day talking to a public defense attorney about the obstacles she faces every day. That may not sound like a typical day for a communications designer, but research like this is a regular part of my work at Reboot. When tackling social issues, successful visual design elements (like every other piece of a project) have to be grounded in an immersive understanding of the problem.
That particular conversation was part of a pro-bono project with the Brooklyn Community Bail Fund, which is tackling the injustice of the current bail system. As my colleague Dane wrote recently, in the absence of meaningful bail reform, our work with the Brooklyn Community Bail Fund seeks to create a short-term solution. But even short-term solutions are tough. Like maternal health, inclusive banking, and every other challenge we have tackled at Reboot, criminal justice is a complex system of people, laws, and culture. You may wonder: Can a communications designer really play a significant role in creating solutions?
The answer is “yes,” but it requires project managers to incorporate us as early as possible. And it requires visual designers to let go of what we think we know.
The social sector is increasingly embracing the value visual design brings to advocacy and policy reform. Project findings reports have made progress in recent years: We see fewer congested text documents and more well-designed PDFs, complete with digestible data and pull quotes for easy reading. Over the last several years, we’ve seen even the most influential development agencies take steps toward becoming stand-out examples of traditional organizations embracing graphic design. From the World Bank to UNICEF, we’ve been lucky to support these steps.
There is still progress to be made. For example, I could write an entire other blog post about the issues with disseminating some of these reports (hint: the solution probably involves a great communications designer). But more broadly, the sector can benefit by tapping visual designers’ skills in more areas. By incorporating us into a project, especially allowing us to be embedded in the research process early on, development programs can utilize their communications designers to not only promote project outcomes but improve them. (As long as we designers are willing to put end-users’ needs ahead of our own love of aesthetics. More on that later!)
This has been clear in our work at Reboot, which stretches the design team far beyond the usual advocacy. Sure, we do our fair share of report design, presentation formatting, and infographics, but our skills are seen as assets from inception to implementation. That’s why we sent a designer to rural Nigeria for My Voice. And it’s why I found myself wrestling with the murky details of arraignment processes.
As part of a multidisciplinary team working with the Brooklyn Community Bail Fund, Reboot’s communication designers were able to support visual solutions for two specific problems.
The first was for public defenders. Brooklyn Community Bail Fund will pay bail for certain non-violent defendants (see Dane’s post), relying on participating attorneys to recommend clients.
During our research, public defenders and fund administrators spoke about their hectic schedules and the large volume of pressing paperwork. We were concerned that the bail fund criteria may be overlooked amidst the chaos of any given day. There is also a pretty small window of time between a client’s meeting with their attorney and when their bail is due; if that window closes, they are on the bus to Rikers Island.
To integrate the criteria into an already hectic court day, we designed an intuitive form that can be understood in just a few seconds. It might not be the flashiest piece of design, but its simplicity removes unnecessary obstacles to using the criteria and mitigates mistakes that could cause qualified candidates to be overlooked.
Our second visual solution was important for the bail fund’s long-term financial sustainability. Since the fund is only replenished when clients show up to their court dates, we developed a strategy and supplemental materials to communicate the importance of coming back to court.
Through our research, we uncovered a key insight: A good public defense attorney—one who respects his or her clients and clearly explains the costs of failing to appear—seemed to be the main determining factor in whether or not a client would return to court. So we developed visual materials and cues to help all attorneys emphasize the importance of the court date. The main solution was a folder to house all of the necessary information, which had previously lived on an easily misplaced, confusing piece of paper. The new folder looks official and important, so it is more likely to stay at the top of clients’ minds. It holds an official court slip, statistics about the positive outcomes of appearing in court, and the public defender’s business card, and it clearly communicates the court date and location:
These two small, visual solutions are an integral part of the entire design team’s work creating a process that serves people and their families during a difficult, emotional time. They were possible because the communications design team was deeply involved in the research process, and because all team members understood and made space for communication designers to contribute value.
It’s important to note that neither of these solutions was designed to win aesthetic awards. “Make it look pretty” is not enough, and is sometimes beside the point. One of the biggest lessons many designers may learn through participating in research is to compromise beauty for effectiveness. More often than not, we have to let go of what we think we know.
There are more than enough (often humorous) anecdotes circulating around the internet about client feedback ruining the perfect piece of graphic design. And while I have had my fair share of frustrations with vague, confusing, or aesthetically-demeaning feedback, Reboot’s research process has taught me the value of understanding end-user perspectives.
My first project at Reboot involved working with The Niger Delta Citizens Budget Platform (NDCBP). The prompt was to create a logo that would express the credibility of this small, innovative local advocacy organization. My first attempt consisted of a series of contemporary logos, but I soon learned that what is visually credible in America would not convey the same associations to the Nigerian public. Although the logo I found most compelling was left behind, the process of listening and understanding led to a better, more contextually appropriate logo. No matter where we’re working, designers have to learn to speak the visual language of the people we serve.
Luckily, we have help: Our work benefits from a multidisciplinary team, with a diversity of skills and perspectives. At the end of the day, we are communications designers. We don’t always know best, so we have to learn from the context and the support of a great team of researchers.
It’s not always possible to involve visual designers as early as we might like. It can seem like there is never enough money or time; doing good work requires flexibility, and visual designers are always going to have to play some amount of catch-up to understand a project as deeply as the project manager or field researcher. But whenever possible, bringing visual design into the early stages of the project helps support the outcomes for the long-term.
Read more about the Brooklyn Community Bail Fund on their website at http://www.brooklynbailfund.org/
With the growing traction of design in development, a number of commercial firms have recently released “toolkits” aimed at guiding practitioners and donors in applying design tools to public sector projects. Some of these guides are good. But many seem…shallow. Development practitioners looking at these colorful PDFs may wonder whether such prescriptive methods can actually work for the interconnected problems that development addresses.
That’s not to say we don’t rely on design tools in our work at Reboot every day; we do. We know that they are useful when researching, understanding, creating, and implementing solutions that are responsive and appropriate—but only as long as the tools are used thoughtfully. And that’s the problem with many of the toolkits available today. When the process is codified in a static “toolkit,” it’s oversimplified. Design should be, by definition, tailored and customized to each project or activity’s context and needs.
I’ve spent a good amount of time dissecting guides and toolkits as a social scientist, a development practitioner, and a designer: using them, critiquing them, writing about them. The number one weakness I observe is that they encourage reliance on the tool as the answer, rather than as a framework for thinking through complex information, grounded in principles that help you judge whether your approach is helping or hurting.
As Reboot has developed a series of internal training manuals, and been approached with requests to productize our methods, we are thinking deeply about these challenges: How do we articulate and teach the idea of a tailored process that requires customization every time? How do we embed principles and values in replication of design tools?
One way to resist this oversimplification is to see a tool in action, as part of a tailored approach to a specific project. And so, in that spirit, I’d like to share how we’ve used a particularly common design tool—the user persona—in our work, across different phases of the project cycle.
User personas are one of the most iconic tools of human-centered design. A persona is a narrative based on real people: a composite of multiple people with common traits and stories. As a detailed description of a typical person touched by a project, it helps conceptualize that project’s different “users.” Creating and working with personas allows designers to think about a problem from users’ perspectives and to spot patterns and themes in qualitative findings. The actual content of a persona will vary widely depending on the project; it may describe, for example, a person’s dreams, daily routines, childhood upbringing, or technical capacity.
Many commercial designers rely on user personas to help define their target audience when creating a new product (the tool’s inventor, Alan Cooper, was trying to make computers easier to use). In development projects, the narrative power of personas to build empathy and bridges between designers, donors, and beneficiaries is especially important, and can be deployed for a wide range of goals. Here are just three ways we’ve used personas in our work:
We created a series of user personas as part of an ethnographic research study to support program managers at a bilateral aid agency in using data more effectively. We needed to understand the kinds of data needed (and why), and the reasons current data practices were insufficient. Crucially, we needed to understand these issues from the viewpoint of the program managers themselves.
As a research team, we created “low-fidelity” user personas, rough drafts that can be made quickly (as opposed to polished, high-fidelity materials shared with people outside of the research process). Written on three-by-two-foot paper sheets, the format made it easier for our research team to collaborate as we organized details and sorted evidence across more than 40 interviews.
Here’s an example of what they looked like:
In addition to the large paper format, we tailored the content of these personas to this project. For example, we included sliders to compare different users on a spectrum of binaries, such as “power and influence,” “access to information needed,” and “tolerance for risk.”
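For readers who think in code, the slider idea can be made concrete as a simple data structure. The sketch below is purely illustrative: the class, field names, and example values are invented for this post and are not Reboot’s actual persona template.

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """A minimal, hypothetical persona record with slider dimensions."""
    name: str
    role: str
    narrative: str
    # Sliders place each persona on a 0.0 (low) to 1.0 (high) spectrum,
    # e.g. "power_and_influence" or "tolerance_for_risk".
    sliders: dict = field(default_factory=dict)

    def compare(self, other: "Persona", dimension: str) -> str:
        """Return the name of whichever persona sits higher on a slider."""
        a = self.sliders.get(dimension, 0.0)
        b = other.sliders.get(dimension, 0.0)
        return self.name if a >= b else other.name

# Invented example personas, not drawn from the actual study.
gisla = Persona(
    name="Gisla", role="program manager",
    narrative="Relies on quarterly reports to justify decisions.",
    sliders={"power_and_influence": 0.7, "tolerance_for_risk": 0.2},
)
nandor = Persona(
    name="Nandor", role="field officer",
    narrative="Collects data on paper forms, rarely sees the results.",
    sliders={"power_and_influence": 0.3, "tolerance_for_risk": 0.8},
)

print(gisla.compare(nandor, "tolerance_for_risk"))  # Nandor
```

The point of the structure, like the point of the paper sliders, is comparability: once every persona is scored on the same spectrums, patterns across dozens of interviews become easy to sort and discuss.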
This isn’t the only way to create and use a user persona; in fact, it wasn’t even the first approach we tried for this project. In our first set, we focused on users’ professional roles, which created confusion and distracted from the key issue of users’ data habits. Reframing the personas around data habits rather than job titles made a big difference: it helped us focus on the specific differences in habits and motivations that shaped users’ data use.
We’re currently using these personas to surface new questions to be explored through further research, and to organize evidence to support or disprove a variety of hypotheses about how institutional policy and individual capacity affect data use in development. We’ll continue to modify and return to this framework for categorizing our information across different data-use habits.
User personas are helpful for clients and stakeholders to connect with unfamiliar beneficiaries. “Building empathy” is a commonly cited benefit of a user persona, and persuasive narratives from a user’s perspective can help others understand the necessity of certain design decisions.
For example, for our recent My Voice project, we needed to account for the practices, habits, motivations, and barriers of not only patients, but also of the local health center staff and policy makers who would use the feedback platform. We knew that a number of design decisions informed by our research would not make intuitive sense to our clients, who held high-level directing positions back in the United States. For example, language comfort and preference drove our decision to phrase questions in ways that sounded awkward to US English speakers, but were common and comprehensible to the tool’s primary beneficiaries.
So we created user personas to vividly illustrate the specific needs and desires of each of our primary users. In contrast to the previous example, these were high-fidelity, professionally-designed, and polished. They told long, in-depth narrative stories and were accompanied by photographs of representative people (who had given permission). They included sliders to indicate how fluent users were in mobile use, how much access they had to internet, their level of education, and the amount of institutional decision-making power they had. Here’s what they looked like:
In this case, personas helped decision-makers connect with the mission behind this project rather than looking only for aggregated numbers on participation rates. What does a young pregnant woman in Wamba, Nigeria need from an SMS feedback tool? Personas can powerfully communicate the human element of design decisions. But it’s important that personas portray audience segments with dignity, and that they are rooted in evidence (instead of exaggerated storylines that beg for pity). They should be a reflection of diligent listening and humble interviewing; it should be their story, not a designer’s imagination of their story.
Going through the hands-on process of building user personas can help people learn about fellow colleagues’ or project beneficiaries’ experiences and needs—and question their own assumptions—in a way that merely reading a designer-prepared persona cannot.
For example, we recently held a workshop with executive-level government officials, most of them government outsiders with experience in the business or tech sectors. They were excited by the potential of government innovation, but many believed that government processes could be more efficient, and found their engagements with colleagues in various government ministries challenging at times: key interaction points that could have been more productive.
Using selected interview excerpts, workshop participants built out low-fidelity personas of their ministry colleagues, complete with names, titles, agencies, years of service in government, reasons for choosing a particular ministry, skills, and frustrations. These short narratives painted a holistic picture of their colleagues’ daily work experience, from those colleagues’ own perspectives. Many participants were surprised to confront their own assumptions in the final product, an experience that challenged them to see their colleagues as potential partners with shared goals and frustrations.
There are competing opinions about the effectiveness of user personas, but in our experience, they have been especially useful for projects in the public sector, where building empathy and protecting the dignity of beneficiaries is vital, but made difficult by politics, distance, limited resources, and competing priorities.
Good development practice requires a thorough understanding of players across an ecosystem: from beneficiaries and communities, to service providers, to institutions, donors, and other powerful decision-makers. To design programs that function and remain sustainable within these complex ecosystems, it is critical to understand how users interact with one another and how policies influence everyone involved.
Tools alone have no magic power to solve every development challenge or generate empathy in every tricky situation. But when rooted in respect for others and a belief that designers are first and foremost facilitators of ideas for the people we serve, design tools like user personas are a first step toward understanding these complex ecosystems of interaction and influence: they challenge our biases and build empathy and understanding.
Have a question about putting user personas to work for you? Leave it in the comments.
If you work in global development, at some point you have found yourself bumping up against the way the sector works. You may be working at an implementing organization on the ground, researching impact at a think tank, setting policy at a ministry, or evaluating proposals at a donor. You may be passionate about one particular issue or your efforts might be focused geographically.
No matter your role or position, there will be times when the structures and incentives in the broader sector undermine the progress that you and your collaborators are able to make. Between contracting requirements, funder demands, public scrutiny, short timelines, and many more obstacles, your work feels like driving on a rugged, muddy road. Any progress you make is a slog: harder than it should be. You can see a dozen ways for the sector to work better, and you wonder why no one is fixing them.
Take heart: You are not going crazy. And you are not alone.
In late April, I was part of a roomful of practitioners gathered for a “Doing Development Differently” meeting in Manila to talk about changing the way the sector works. Everyone in the room had a story (or a hundred) of frustrations to share. More importantly, everyone was finding ways to move things forward. This burgeoning conversation holds promise for anyone working to make the development sector work better.
The Doing Development Differently (or “DDD” for short) conversation started at an event in Cambridge last fall. The conversation has been codified in a manifesto, which, among other things, calls for the development sector to orient efforts toward problems rather than pre-defined solutions; to ensure local ownership of efforts at all political and managerial levels; to iterate rapidly between program design and implementation; and to manage risks through the use of “small bets” and fast failure.
After participating in that first meeting, I wrote with cautious optimism about the challenges facing the DDD movement, and the questions of what to do next. The recent follow-up workshop in Manila took on many of those questions.
Hosted by the Overseas Development Institute and The Asia Foundation, the Manila workshop worked to establish a deeper understanding of DDD. Short talks from a range of practitioners offered examples of work that aligns with the principles of the manifesto. Particular highlights included Toix Cerna discussing education reform efforts in the Philippines, Gerry Fox and Aung Kyaw Thein describing the Pyoe Pin program in Burma, and Anna Winoto sharing her experience at Indonesia’s National Development Planning Ministry.
Participants then turned their attention to what it means to put the DDD principles into practice. For example, I was surprised to realize that many of the projects discussed at the workshop were using traditional program management tools, such as logframes, but transforming them by implementing them in adaptive and participatory ways. These traditional tools, and the donor mandates that accompany them, are often sources of frustration for implementers. The workshop discussions showed that DDD is not always about freeing implementers from these tools, but rather about re-appropriating them. Constructive personal relationships between staff at implementing organizations and at funders are key to making this work.
The DDD conversation is far from the only reform effort; a number of movements are trying to change the way the sector works. There are calls for evidence-based policy from academics, think tankers, and others who see both fads and archaic methods capturing too many resources in the sector. Those calls resonate with the value-for-money agenda that shapes constraints on many bilateral donors. Similarly, the social enterprise movement encourages the development sector to draw from private sector methods.
Along a different axis are the reform movements focused more toward participation and local ownership, which put greater focus on the “how” of development aid instead of the “what.” And a set of conversations around thinking and working politically emphasizes the need to grapple with the power structures and self-interests in development, especially at the national level.
In this crowded field of reform efforts, there is an outstanding question of how the DDD movement should distinguish itself from—or ally itself with—other reformers. There are clear overlaps with the thinking-and-working-politically crowd, as well as with the participatory and local-ownership movements. On the other hand, DDD stands apart from the calls for evidence-based methods in its willingness to use more qualitative methods and to iterate programs based on more rapid forms of feedback. Its emphasis on governance and politics also sets it apart from private-sector approaches.
The proliferation of reform efforts is due, in part, to the fact that defining a solution is harder than describing a problem. Harder still is implementing a solution; and hardest of all is propagating that solution across the sector. The development sector is quick to reflect, but slow to change.
Where does that leave you—the development professional pulling out your hair in your own corner of the sector?
If, like me, you’re an optimist about the sector’s ability to change, then follow these conversations, contribute to them, and draw from them. Many of these reform efforts will provide you with the frameworks you need to plan a new effort, or the language and external validation you need to convince a donor to try a new approach. These conversations can also provide you with the networks of like-minded thinkers and the camaraderie you need to avoid banging your head on your desk.
And, if you do, please share your experience. The sector needs its own feedback loops to continue refining its efforts, and all of us reformers do, too. Because none of these manifestos, convenings, or workshops will matter unless we actually create change in our field.
Today, as President Muhammadu Buhari takes office, Nigerians are celebrating a major milestone. For the first time since the country’s independence in 1960, after fifty-five years of corruption and stolen elections, citizens have ousted an incumbent president through the ballot box.
President Jonathan’s concession has been hailed as the biggest step taken by any Nigerian leader toward a healthy democracy. But more importantly, this transition should be seen as a major victory for Nigerian civil society.
Citizens played a significant part in the transparency of the election itself, and the results reflect civil society’s efforts to increase constructive dialogue with political leaders. While Nigeria has a long road ahead to achieve true “good governance,” I’m optimistic: civil society is poised to take a major role in holding President Buhari to account.
The 2015 elections, while not perfect, were a huge step for transparency and legitimacy in Nigeria. This progress was the direct result of years of hard work by citizens and organizations, including Enough is Enough and the Transition Monitoring Group, who are continuing to push for improvements to the country’s electoral processes.
In response to these advocates (and learning from the disappointing 2011 elections), the reform-minded Nigerian electoral commission chair Attahiru Jega improved citizen registration, accreditation, and voting processes in 2015. The registration process in particular, though drawn-out and highly scrutinized, ultimately succeeded in capping the influence interlopers could have on overall voting results.
Importantly, in addition to these procedural reforms, Jega successfully engaged citizens in the mechanics of the voting process. Civil society, especially Nigeria’s National Youth Service Corps, played an active role in monitoring the elections. The counting, collation, and reporting processes were all conducted openly, increasing transparency. And, because election results were broadcast live via radio and TV, everyone—from gate guards to bankers—spent two days scribbling down and analyzing results as they came in.
That’s not to say there isn’t a lot of room for growth. The commission should have distributed election results in an easy-to-read format, for example, instead of creating a dense, complicated PDF spreadsheet.
Election reform will be an ongoing and heated debate in Nigeria for years. But the fact that this year’s election processes successfully engaged citizens as both voters and monitors is an exciting and positive step.
As we shift from the campaign cycle to the work of a new government, civil society has a new opportunity to increase its voice in the shape of the country’s future: for perhaps the first time in Nigeria, the leaders in power see the advantages of delivering results.
Buhari is part of the All Progressives Congress (APC), the first political party in Nigeria whose leaders rely on citizens’ votes to keep their jobs. For decades, the People’s Democratic Party (PDP) maintained its power by fueling clientelism rather than winning popular support. With no credible opposition, PDP leaders were known to blatantly mock the people’s will in public speeches and policy alike.
The APC can’t afford to be so reckless. A national coalition of diverse factions from across the country, the party was created only in 2013; its leaders know they must win popular support to be elected, and must deliver results to stay in power. In addition to President Buhari, APC leaders include the current governors of Lagos, Edo, Nasarawa, and Rivers states. Reboot has worked in all of these states as a partner on local reform agendas, and we’ve seen signs that give us reason to be cautiously optimistic.
These leaders are motivated to listen to civil society, and they are coming to power at a time when Nigerian civil society is coming of age.
Reboot recently worked to support a media platform for education advocates and activists in the Niger Delta, a project that illustrates the increasingly proactive role of the civil society sector.
Working with Nigeria’s first all-news talk radio station, Nigeria Info, Reboot invested in co-developing a weekly program, “The Portal,” a platform for civil society to engage broad groups of citizens on urgent issues. The show focused on education spending, a local political hot button, and aimed not only to create public conversations but also to pressure the state government to respond.
Reboot embedded with producers at Nigeria Info to explore the business development side of the show, which had no real precedent in the market. We also worked closely with the civil society groups to support their capacity for creating compelling content, with a firm grounding in professional reporting and analysis. And, by developing a communications strategy for engaging the public in two-way discussion, we helped public voices contribute to the show’s content.
The approach worked. You can read our full case study on The Portal here, but the top line is: The show earned a dedicated following, and within six months secured public commitments from government officials (members of the APC) to increase accountability in education spending. Most importantly, these budgeting measures were undertaken in collaboration with civil society groups.
There is a growing community of civil society actors around Nigeria who, like our partners at The Portal, are making incremental steps toward a more accountable, transparent, and effective government. What is the international development community’s role in making sure they can reach these lofty goals?
Nigeria has a long road of reform ahead, and an increasingly deep bench of reformers. In addition to the civil society sector, there is a growing community of politically active youth in the country who feel strongly that they elected Buhari—and they are determined to hold him and his administration to account.
The idealism of the grassroots will be an important accelerator in the continued push for reform. One next step, which international donors would be wise to support, is for these activists and civil groups to take a larger role in politics. For example, finding ways to enable youth to field candidates will help elevate new leaders who are grounded in the idealism and values of progress, and who can make larger steps toward reform.
It’s an idealistic goal. But with a victory as historic as this one, Nigerians deserve some idealism.
Editor’s note: This blog post is edited from Panthea’s keynote presentation at the 2015 Canadian Open Data Summit.
Reboot was founded on the belief that citizens should have a greater say in the policies and processes that impact their lives. Over the past few years, we’ve seen open data play an increasingly important role in realizing this vision.
Last week, I was in Tanzania during the Open Government Partnership’s Africa Regional Meeting. The Government of Tanzania recently passed new legislation that severely restricts media’s ability to publish and analyze statistics. Civil society used the meeting to express its concerns about these bills, and President Kikwete directly acknowledged them: “Bad laws can be corrected, so bring your suggestions. We [the government] are ready to discuss.” This demonstrates the passion that open data inspires—access to information is gaining acceptance as a vital right—and the importance of channels through which citizens can direct that passion.
Yet in our work around the world, we see many situations where open data proponents risk missing the forest for the trees. The political change that citizens want to see through open data is not always aligned with the focus of many discussions in the open data community, which are more taken with technical concerns. Efforts are often so focused on refining the granular dimensions of open data that we lose sight of the larger ways that open data promises social change. We speak frequently about how open data can improve our interactions as members of society, but less about how it can improve society itself.
Canada also seems to be wrestling with these questions. In his morning address, Tim Davies urged us to embed open data in wider processes of change. Renee Sieber then asked, “How do we encourage more politics in open data when so much of our community wants to think of data as apolitical?” Tim’s answer was brilliant. He said, “Politics comes from first asking questions, and we can start with small-p politics.” Indeed, it’s remarkable to see how examining and working with government datasets has politicized a new group of people.
Throughout the Open Data Summit, I heard many questions about whether our collective efforts have had much of an impact on how our country works. And I sensed the general consensus answer is, “Not yet.” If that’s true, then before we move forward with defining technical standards and collaboration mechanisms, we must first ask how we can achieve the impact we desire. Otherwise, coming to common technical solutions may be premature, and in some cases may ossify practices that run counter to our larger goals.
I was honored to share the stage at the Open Data Summit with Minister Tony Clement. Under his leadership, the Canadian government has made great strides in advancing open data to make Canadian enterprise and government more efficient and effective, and to make citizens’ day-to-day lives easier. In terms of datasets released, Canada is leading the world. The government has not only built a data repository, it has helped build a community around open data and shown willingness to listen to that community.
Beyond its borders, Canada has invested in supporting open data in developing countries and in international efforts. The country has committed over $20 million to the Extractive Industries Transparency Initiative to promote greater transparency and accountability in natural resources extraction, an industry that comprises 20 percent of Canada’s GDP.
And we’re seeing the results of Canada’s investment pay off. CODE 2015 was a great event that demonstrated the potential of open data: it gathered developers for a 48-hour code sprint to build apps that make government data available and usable for ordinary citizens, helping answer questions such as: How can I make healthier food choices in my neighbourhood? How can we help youth make informed career decisions?
These are important questions. Yet both in preparing for this event and in speaking with many of you in the sessions and breaks, I sensed that the Canadian public is grappling with other, tougher questions that open data may be able to help answer.
This word cloud is from a 184-person consultation conducted by the Government of Canada that asked citizens to describe their interest in open government. As we can see, beyond economic growth, citizens also want more access to and engagement with government. They want a greater say in the decisions that impact their families and communities.
To that end, perhaps we could use open data to help answer questions such as: How do private corporations and their lobbyists influence where my tax dollars go? How might civic discourse around public health and environmental concerns be influenced by restrictions on government scientists speaking to media? And why, despite its investment in the Extractive Industries Transparency Initiative, is Canada not a participating country? Why is it instead offering Canadian mining companies another, independent path for financial disclosure that does not comply with the international standards it helped define?
It strikes me that as individual citizens, we are asking systemic, macro-level questions; but as an open-data community, we are largely pursuing incremental, micro-level change.
If we believe that open data can enable more informed, vibrant democratic dialogue, then it is our responsibility to help facilitate such dialogue.
First, as individuals who work in government, or technology, or civil society, or in another capacity as advocates for open data, we can do so by thinking politically, even when acting technically. We can use our positions as technical experts to facilitate critical conversations about broader policy. The field of open data is new, and many of us are figuring out how to do things for the first time. The processes and standards we define will have impact far beyond our individual projects and careers. Thus, we must ensure our values of transparency and collaboration go beyond technical protocol and are embedded in every aspect of the efforts we are involved in.
After the revolution in Libya, my firm developed the country’s first digital voter registration and elections management platform. In the process, we tackled many thorny technical questions relating to data flow, data security, and data release. But we and our Government of Libya counterparts were also wrestling with what it means to govern in a newly democratic state. We were defining what a 21st-century social contract looks like when it comes to citizen data and government transparency.
I know many of you are working on similar technical challenges, and as you do so, I urge you to keep in mind what the technical protocols we define today may mean for our governance structures and processes in the future.
Second, we need to think about how our work can empower citizens to act politically, too. As we saw earlier, citizens are already thinking politically; we need to make it easier for them to act on their convictions. We need to design targeted, effective feedback loops between citizens and government.
One platform which does this very well is POPVOX, which was founded by my friend Marci Harris. Citizens log on and identify the issues that are important to them, and POPVOX lets them know when relevant policy conversations or legislation are happening and provides them a channel to share their views with their elected representatives. It enables citizens to participate in the democratic process at the specific moments when their voices will have the greatest impact. As you can see from the testimonials, users are happy: “I know that when I express my opinion on an issue, my legislators will receive it in a timely manner, not as a junk mail ‘petition,’ but as a relevant communication from a verified constituent.”
Going back to the CODE apps, one of them asked: How can I make healthier food choices in my neighbourhood? To this community, I ask: What data can we provide citizens to contextualize the answers they get? Perhaps we can provide data about how the national meat, dairy, and egg lobbies have influenced Canada’s Food Guide to increase the recommended servings of their industries’ products. Or perhaps we can provide a way for citizens to generate their own data, so that the choices they see aren’t just those from Starbucks, KFC, White Spot, and other Big Food corporations.
Finally, as we push for open data to be a priority, we need to act with empathy for governments. As Demond Drummer said, “We need to help technologists understand the slow, lumbering process of democracy.”
I recently had a conversation with a Treasury Board employee who was frustrated because he felt that his hard, day-to-day work to advance open data had been overshadowed by recent controversies over the cancellation of the compulsory long-form census. While he was sympathetic to the criticisms, he noted that the decision was above his pay-grade. Getting criticized for something that was out of his hands felt unfair, and impacted his motivation to work on the issues that were within his control.
When I first started in this field, I often assumed ill intent when a policy wasn’t properly implemented. I thought politicians just issued nice-sounding statements to gain political support. Over the years, I’ve learned that while this is sometimes the case, the implementation gap can often be traced back to poor planning. Many implementing officials and agencies are left holding the bag, having never received sufficient political cover, budget, or human capacity.
This perspective has given me more empathy for government. Deeply understanding how government works is important for aligning our work with strategic priorities, and for designing new avenues to accelerate open data that are both creative and feasible. In our work with the Government of Mexico, we’ve seen our counterparts very successfully wield the Open Government Partnership to establish political authority for many innovative initiatives; it provides both incentive and cover for civil servants to experiment. We’ve used ethnographic research to ground our evaluation and advisory work. Doing so helped us identify the ‘sweet spots’ of political significance for a new initiative, integrate with existing bureaucratic timelines and processes, and navigate both formal and informal channels for institutional change.
It has been an honor to be here at the Open Data Summit, and it is clear that the value of this community is its power to organize around a shared vision. There has been robust, successful action around technical goals. It is time to tackle our political aspirations and to reconcile the concerns we have as citizens with the work we are doing as professionals.
Photo: Flickr user nicmcphee
A couple of years ago, I had a moment of crisis about the role of design in tackling the challenges of our time.
My firm had been asked to take on a project supporting the democratic transition in Libya. After 42 years of autocratic rule, citizens could finally vote, and our task was to help the transitional government develop a system to register and manage voters. Libya is a geographically vast country, with a diversity of ethnicities and tribes, as well as an estimated 800,000 citizens living abroad, and the project was sure to pose compelling design and development challenges. While development firms working in these kinds of environments have been criticized for “parachuting in” to drop off a generic, pre-designed technology product, that wasn’t an option for us. First of all, no one had ever created a mobile voter registration system before. Second, we wanted to show what a truly human-centered, contextually-grounded approach could accomplish.
Still, the decision to accept the work kept me up at night. I was worried that we might end up doing more harm than good by taking the project.
I was trained in the private sector, but for nearly a decade now I’ve been part of a growing community in international development seeking to use design to tackle the world’s most difficult problems. For example, in response to the interest expressed by international organizations and donors, including Melinda Gates, nearly every major commercial design consultancy—including Ideo and Frog—has launched a “social innovation” arm, and the strategy consultancy Dalberg has just launched a new design practice. This community is taking the tools that corporations have used for decades to create products and services that people want, and applying them to the public space to create the products and services—like medical care or access to education—that people desperately need.
Yet I’ve noticed in this community a growing nervousness about how much design can create positive change. At conferences, online, and in private conversations, my colleagues are wrestling with the ways that “design for development” is falling short of its promises.
This is not surprising, given how fundamentally different the dynamics of the development space are compared to those of the commercial world. For one thing, in functioning markets, the user (aka, the customer) is powerful because she has money to spend. The users of a development program are often marginalized and powerless, with no money or voice to compel governments to listen to them.
And where commercial projects have a clear idea of the user and clear measures of success (e.g., widgets sold, conversion rate, or plain old revenue—a great organizing principle), public sector design projects have no set “bottom line.” How do you define “improved governance”? Is it stronger rule of law? Reduced violence and crime? Better public services? It depends on who you ask, and there are often complex politics and different interests at stake.
What’s more, the world’s most intractable problems are deeply rooted in massive systems, while design is a discipline focused on the edges. Traditional design focuses on creating and improving society’s outputs and interactions, such as a sleeker mobile phone or a more efficient way to buy coffee. When these skills are translated over to the public sphere, design still tends to focus on outputs instead of the real systemic problems. We create apps to help students study for their SATs, but deep down we know the education system isn’t investing enough in schools in poor neighborhoods. We design websites to help citizens surface ideas to their governments, but we know the heavy hand of corporations in politics prevents these ideas from actually getting used. These projects pursue admirable goals, but because they’re focused on the edges, they’re only making incremental improvements in a time when we need fundamental change.
We’re facing ocean-sized problems armed with teaspoons.
I’ve struggled with these questions for much of my career. In our work at Reboot, we know that design practices are only one tool in our toolkit. Our work has also drawn heavily from other fields, complementing the strengths of design with those of ethnography, economics, political science, and the development field. Our designers are humble and recognize how much we have to learn from these disciplines, which have wrestled with the world’s thorniest problems for decades.
But there’s another deeper concern, one that caused many a sleepless night around the elections project in Libya: With our good intentions, our human-centered ethos, and our appropriate tech solutions, would we create a veneer of good will that distracts attention from, and maybe even perpetuates, the global systems of injustice?
Because let’s face it: Even when well-spent, aid money can be a way of sweeping real solutions under the rug. It’s fairly cheap and easy for rich countries to disburse aid, compared to the effort and expense of arriving at the difficult political bargains on migration, climate change, trade, and other issues that would change lives in more meaningful ways.
Worse: In conflict situations, aid can be a part of the cycle of violence. When Libya collapsed, there were 20 million arms in non-state hands—in a country of just 6.2 million people. Those arms were sold to Gaddafi’s forces by Western countries, including the UK and Italy. We were complicit in the country’s destruction, a complicity that tends to be overlooked at United Nations convenings and expert panels on countering weapons proliferation.
Despite these concerns, I believe that international development is still one important path to progress. I believe we have a responsibility to take action however we can to alleviate injustice and poverty around the world. And I believe that designers have much to offer in the larger collective effort to push forward social and economic progress.
Reboot took the project in Libya. We spent seven months there, through periods of conflict, to keep us responsive to Libya’s dynamic political situation as we worked to meet tight deadlines. Objectively, the project was a success: We created the world’s first mobile-based voter registration system, which millions of Libyans used to register at home and from abroad. Our system also helped the Libyan government increase the sophistication with which it managed its elections across the country. It continues to be used in Libya today.
But in this work, we also had to recognize that creating a voter registration system was just us tinkering at the edges of the large, complex, decades-long process of developing a legitimate democracy. We had to be conscious of our role in this larger system, so that even when we disagreed with our Libyan counterparts, we were there to advise, not impose. This, after all, was about change on their terms and timelines, not ours.
As designers, it’s easy to “know the right solution.” (All the user research we did! All the systems maps! All the A/B testing!) But democracy requires people driving for themselves. And when we were able to set aside our “evidence-based design decisions” and our desire to “drive positive change” as we defined these terms, we noticed a shift in what our work was serving.
More important than our “solution” to the design problem at hand were the conversations enabled by the co-design process we’d undertaken with our Libyan counterparts. As we explored technical questions about the voter management system—what data security protocol to use, where information should flow, what analytics to track—we were creating the space for dialogue about larger questions of governance. These were questions about a government’s responsibilities to its citizens and the extent to which a state should invest to fulfill these responsibilities; we were, in short, discussing what it means to realize a renegotiated social contract among Libya’s population and governing institutions.
Many of the new government officials had participated in the overthrow of Gaddafi; as we worked together on our small contribution to post-revolution Libya—helping citizens participate in elections—we were facilitating a conversation about what it means not just to revolt against a dictator, but to govern in a just, inclusive way.
Designers have great power: We can “nudge” people to behave differently. Collectively, if more of us are working toward positive change, the impact of a million little nudges in the right direction has immense potential. The flip side of this is that those actions we don’t take have an impact, too. If we are passive, and fail to actively engage in our work with the larger systems and goals in mind, we’ll be (at best) complicit in the social injustices around us.
But if we’re willing to tackle the thorny problems, to get involved in messy policy and political debates, and to go head-to-head with organizations and interests that would prefer we didn’t ask the tough questions, designers can be part of larger solutions.
Each of us may only have a teaspoon. But if we’re all scooping in the right direction, maybe we can start to make some waves.
This article was originally published on Fast Company’s Co.Exist channel on May 14, 2015.
Inspiring commentary about the potential economic and civic benefits of open data is everywhere these days. I’ve seen the momentum building here in Nigeria, especially spurred by the government’s first Open Data Development Initiative, launched in early 2014 and facilitated by The World Bank. Additional investments by international donors have continued supporting the country’s growing community of open data enthusiasts, and the frequency of data-related workshops and hackathons has jumped from virtually zero to periodic events in Lagos, Abuja, Benin, and more.
But there’s a major roadblock to the realization of many open data initiatives in Nigeria: A dearth of high-value data.
There is no single definition of “high-value data,” but roughly, it’s information that makes government spending, enforcement, policy, or other practices transparent and responsive. Since responsiveness is one of the most important goals of open data, citizens largely define the “value” of any given data. As a good rule of thumb: If citizens aren’t convinced, the data isn’t high value.
Through open data events, often targeting civil society and other “demand-side” players, funders hope to catalyze the development of profitable, data-driven civic apps. But we, the practitioners on the ground, often cannot find the up-to-date, trustworthy data needed to create useful applications for journalism, advocacy, or development.
Realizing the lofty visions of open data is difficult no matter where you are, but it’s especially hard in a place like Nigeria: home to a nascent open source community, with a highly politicized election happening tomorrow after a six-week postponement, and where government statistics suggest as few as five percent of the population consistently accesses the Internet. The large, information-rich datasets that traditionally comprise “open data,” often created by governments, are few and far between in Nigeria. Despite the government’s efforts, many datasets that do exist are irrelevant, outdated, incomplete, or mistrusted by citizens.
I witnessed an illustration of this problem after a high-profile model school construction initiative launched in Rivers State. With the school projects nearing completion, the state Ministry of Education set out to share the results through a public website, showing how many model schools had been built to impressive international standards. But many people were suspicious. One local advocacy organization, the Niger Delta Citizens and Budget Platform, questioned how many of the schools had actually been built to specification and were in use. Rather than increasing transparency as intended, the website threatened to exacerbate mistrust between government and citizens.
In partnership with The World Bank, Reboot supported the Niger Delta Citizens and Budget Platform to go into the field and gather a new set of high-value data, and in doing so, open a channel of civic discourse with the Ministry of Education.
Using Formhub, an Android phone-based data collection tool, we investigated a representative sample of the more than 200 public primary school construction projects in five Local Government Areas. Surveyors visited school locations and asked basic questions about each, starting with whether or not the school actually existed. They also collected data on the quality of construction (such as the condition of the roof) and the local community (such as whether it was rural or urban).
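Once collected, records like these can be rolled up into the headline figures an advocacy group needs for its discussions with officials. The sketch below is purely illustrative: the field names (`exists`, `roof_condition`, `setting`) are hypothetical stand-ins, not the actual survey schema used in Rivers State.

```python
# Illustrative summary of school-survey records. Field names are
# hypothetical, not the real Formhub survey schema.
from collections import Counter

def summarize(records):
    """Tally existence, roof quality, and rural/urban split across surveyed schools."""
    counts = Counter()
    for r in records:
        counts["surveyed"] += 1
        if r["exists"]:
            counts["exists"] += 1
            if r["roof_condition"] == "good":
                counts["roof_good"] += 1
        if r["setting"] == "rural":
            counts["rural"] += 1
    existing = counts["exists"]
    return {
        "surveyed": counts["surveyed"],
        "pct_existing": 100 * existing / counts["surveyed"],
        "pct_good_roof_of_existing": 100 * counts["roof_good"] / existing if existing else 0,
        "pct_rural": 100 * counts["rural"] / counts["surveyed"],
    }

# A few made-up records standing in for real survey submissions
sample = [
    {"exists": True, "roof_condition": "good", "setting": "rural"},
    {"exists": True, "roof_condition": "poor", "setting": "urban"},
    {"exists": False, "roof_condition": None, "setting": "rural"},
    {"exists": True, "roof_condition": "good", "setting": "rural"},
]
print(summarize(sample))
```

Even a simple rollup like this turns hundreds of individual site visits into claims that can be checked and debated, which is what made the dataset useful in conversations with the Ministry.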
Despite a history of antagonism with the Ministry of Education, the advocacy group was able to use the new, accurate, up-to-date dataset to engage government officials in a constructive way, increasing their voice in decisions about education spending. With support from Reboot (you can read a case study of our year-long engagement here), the organization synthesized their experiences in the field into valuable insights, offering compelling results to state decision-makers. For example, they were able to discuss not only whether the schools were built as planned, but whether trust in the government had shifted as a result.
The Ministry was impressed, and the project was the start of an ongoing conversation about the allocation of public resources. By facilitating productive communication between citizens and their government, this project made open government data matter. Our partners managed to deliver on the promise of open data despite the initial lack of high-value data.
The data collection tool itself was one key to the project’s success. Formhub was a good choice given the constraints of field work in Rivers State. Most critically, Formhub doesn’t require a consistent Internet connection; the data is stored locally and uploaded the next time a connection is made. The app is designed to digitize data at the point of collection, which is when it’s most likely to be accurate. It also means that surveyors feel more responsibility for data quality, as opposed to past approaches, which relied on answers written on sheets of paper and handed off (along with half of the responsibility and most of the ownership) for data entry by someone else. When surveyors have final say over the entered data, they are also better data collectors, able to think critically about the use of the data they collect.
Formhub might be a good fit for other service monitoring or advocacy projects: A project of the Modi Research Group at Columbia University’s Earth Institute, Formhub was developed for use in the field here in Nigeria, in a way that very much reflects Reboot’s own values: It was designed iteratively, specifically for use in resource-constrained environments. There’s also a strong developer community around the open source tool, including people who are eager to give tips on survey design.
Formhub’s continued development and applications are exciting; just recently, the Nigerian Office for Millennium Development Goals launched a website to capture and display data showing Nigeria’s progress toward achieving development goals at the national, regional, and state levels. Browsing through the site shows the full extent of the tool’s capabilities.
The Millennium Development Goals website is a solid step toward generating high-value data. However, it remains to be seen whether the government has the resources to continue updating and maintaining the dataset.
As Nigeria continues to build its national open data movement, it’s vital to recognize that “open data” will never be a one-size-fits-all solution. And as the international development community plans for “demand-side” workshops for training civil society and journalists on how to use open data, we should start by understanding how they are using data now—or not—and why.
Civil society and journalists play an essential role in raising data standards. The Rivers State model school example shows that service monitoring projects can raise the bar for higher-quality, more relevant data, even where the relationship between government and civil society is characterized by mistrust, and initial data is flawed. If we focus on data as a means to encourage constructive dialogue with government officials, each step we take can bring us closer to a truly open data culture at the national level.
A version of this article appeared on the Sunlight Foundation’s OpenGov Voices blog on April 13, 2015.
The World Bank’s 2015 World Development Report focuses on understanding real human choices (as opposed to the “rational decision-makers” of traditional economics) in a way that’s often critically missing from the development discourse. I was pleased to see that focus, even though the report also had some crucial gaps and showed that our field still has farther to go in blending methods like behavioral economics and design, as I analyzed in a prior blog post.
Consideration of these gaps leads to another, related problem: The continued dominance of economics over politics in development thinking. This bias is not unique to the World Development Report, but rather a shortcoming of the sector at large. However, this particular report shows it glaringly.
The dominance of economics would almost make sense if we defined “development” in narrowly economic terms. Fortunately, the sector has long recognized the multidimensional nature of development, where non-economic factors like health outcomes, clean air, and human freedoms are just as critical as GDP growth.
However, this progress still isn’t fully reflected in benchmark documents like the World Development Report, which approaches every topic from an economics perspective. For example, field experiments by economists are used to justify cultural insights or the existence of altruism. Meanwhile, the psychological impact of poverty must be explained as a “cognitive tax”—as if the idea of cognitive overload needs a public finances framing.
In contrast, ethnography is mentioned only about six times in the entire 200-page report. The two-page “spotlight” on ethnography does little more than assert the simple importance of cultural and social norms, without discussing ethnographic methods that practitioners should consider or the ways findings from ethnographies should inform decisions. This is more than a mere dispute on types of evidence or approaches to analysis. A narrow disciplinary focus restricts our thinking on which problems are important.
This World Development Report is multidisciplinary, in its own way. By adding “behavioral” in front of “economics,” it somewhat expands the methodological toolkit. But the economics are still dominant. This is the narrowest form of multidisciplinarity, which takes a single discipline as a starting point and judges the others on its terms.
The sector is increasingly recognizing the importance of politics to development, yet political thinking is missing from the World Development Report. The few instances where political issues are discussed serve to highlight the extent of this blind spot, as ostensibly political topics get a conspicuously apolitical treatment.
For example, the World Development Report describes how electronic voting in Brazil enfranchised poorly educated citizens who had struggled to deal with the previous paper ballots. This had the effect of increasing the power of the political Left, resulting in more funding for health services over time. The system’s designers themselves were surprised by this outcome. The report’s analysis fails to acknowledge that a similar effort in most contexts would meet with opposition, especially if it were explicitly intended to make voting easier for a particular set of voters. This fact is particularly close to home, as voter identification criteria in the United States fall into exactly this political trap.
In another section, the report explains how corruption is a social norm in some contexts, and uses this fact as the basis for a discussion about changing the norm. Yet the political nature of corruption and its relationship to power structures is unaddressed. In many contexts, corruption—defined as the use of public office for private gain—is a key pillar in the political system that protects incumbent powers. Corruption can even have positive effects, as it may build political stability in a regime and distribute the rents of the state throughout the middle- and lower-ranks of public officials. We saw many of these effects in our research in Nigeria. Something similar can occur with forms of clientelism. The report is unable to challenge its own normative framing, drawn from economics, that corruption is inherently negative, and so misses the political nature of the phenomenon.
These kinds of omissions appear again and again. A chapter on climate change reads more like a guide to behavioral economics for advocacy groups, with insights on how to use framing to account for biases and build support. There is sound advice here, but these insights are trivial compared to the political and economic interests aligned against meaningful climate action. The report makes no mention of oil companies wielding concentrated wealth to shape public policy, or of the conflict between developed and emerging countries over who will shoulder the burden of reducing future emissions.
Development sector professionals have an amazing ability to wish politics away in our analysis and action. Despite the importance of interest groups and contested space to historical outcomes, we are reluctant to include these political factors when thinking about our own work in the present. We have to work harder to see these blind spots. Any analysis of promoting change or sector learning that ignores these factors impoverishes itself.
The report’s lack of political nuance is intertwined with a thread of paternalism that runs through any policy application of behavioral economics: the aim is to shape the choices of people, especially poor people. This paternalism is less of a concern in contexts where those being “nudged” have a way to hold the “nudgers” accountable. For example, the British government’s “Behavioural Insights Team” is indirectly accountable to the voters themselves, via their political representatives. In contrast, an international development organization seeking to influence citizens in poor countries faces no such accountability.
The result is a flawed framework that recognizes human irrationality but not human agency. It conceptualizes development as something done to individuals and communities, rather than with or by them. Their role is to be nudged into better development outcomes, in spite of their own imperfect decision-making. This paternalism is ethically flawed; the fact that it often fails to achieve development outcomes only adds to the case against it.
As we expand our analytical toolkit beyond economics and deepen our understanding of the nuances of human choice, we also need to expand our thinking on the role that individuals play in development outcomes. Analytically recognizing the role of politics is just one step. The corollary to understanding politics is recognizing power. We have much further to go, to programmatically allow for the agency and power of individuals to drive their own development.
Every year, the World Bank’s World Development Report offers a detailed look at the state of knowledge around a single development topic. While recent reports focused on topics like jobs, risk, or gender equity, this year’s edition, titled Mind, Society, and Behavior, reflects the growing attention being given to human decision-making in development. Among wonks, it has been called the “behavioral economics World Development Report.” In reality, it is broader than that, yet narrower than it could be.
The report is framed in three dimensions: Thinking automatically, Thinking socially, and Thinking with mental models. Thinking automatically articulates how heuristics and decision-making shortcuts can lead us to suboptimal choices, drawing on the research of Daniel Kahneman and others. Thinking socially discusses the impact of social norms, expectations, and cooperation in our decision-making. Finally, thinking with mental models refers to the framing and categories we bring to our decisions.
Through this framework, the report establishes and explores the idea that humans are not always coldly rational decision-makers. Economists often present this idea as if it were surprising and slightly disappointing, but anyone who works with real humans knows that we are nuanced creatures. Our individual choices, in all their complexity, involve shortcuts and social dimensions that we’re only starting to understand.
This is critical to our sector because the outcomes of individual choices have social ramifications. This fact often gets lost or glossed over, subordinated to the policies, markets, or historical forces that are considered more consequential to development outcomes. With Mind, Society, and Behavior, the World Bank presents the case for elevating individuals’ decisions to the same level of importance.
Here at Reboot, we often use the term “human-centered” to describe our approach, especially to design. The World Development Report focuses on the discipline of behavioral economics, another “human-centered” practice. Both design and behavioral economics are increasingly popular approaches for recognizing the role of human choices at the crux of many development problems. The relationship between these two methods is still developing, and there are still unresolved differences.
There are important connections between these disciplines. The insights of behavioral economics can inform design choices. The World Development Report offers a few of these insights, noting the accumulating evidence in favor of specific practices, such as regular text message reminders to promote savings. There is also evidence for a few broader principles, such as behavior change through “social proof” approaches—i.e. emphasizing that other people are engaging in a desirable behavior such as voting, or paying taxes. Program or service designers can use these insights as heuristics of their own.
However, the two methods are not in complete alignment. Behavioral economics has a tendency to universalize, drawing insights from a particular context and applying them broadly. This is powerful but dangerous. It risks encouraging “cookie-cutter” development thinking, where solutions are applied without adaptation to context. In contrast, human-centered design has an incredible sensitivity to specific needs in a given use-case. The risk here is zooming in too close, focusing on a design’s minutiae at the expense of the needs it’s meant to serve. How many times have we seen impractical, expensive, or over-engineered solutions from designers whose focus on the design itself (and on good reviews on social enterprise blogs) sets up blinders that obscure the real people who are meant to benefit?
This tension doesn’t mean the methods are in opposition. In fact, the combination of the two could be formidable. To see how that could happen, let’s turn back to the World Development Report.
The final chapters of Mind, Society, and Behavior pivot to a topic close to my heart: adaptation in design and programs. It proposes an iterative program cycle, incorporating redefinition and rediagnosis throughout implementation (see diagram). This is a welcome emphasis on continual learning. The report further ties this process to an analysis of bias among development professionals, who struggle to understand the individuals they are meant to serve.
Unfortunately, the report’s recommendations on this front fall short. These include simplistic solutions: For navigating the complexity of human motivations, the report suggests applying more social and political analysis upfront. To combat confirmation bias by development practitioners, use red teams or double-blind peer review. To understand the importance of context, implementers should engage in service trials. While these proposed solutions move us in the right direction, they feel shallow.
At their core, these proposals revolve around improving the knowledge of the interveners. That’s a worthy goal. But it starts from the perspective that development interventions are done by development organizations to beneficiaries and communities. It focuses on extracting knowledge from the end-users of development products and services so that knowledge can improve decisions made by others. Why not bring those end-users into the decision-making process? We need a deeper reexamination of the role of development practitioners.
Participatory methods and co-creation alongside users are not new to the development sector, yet they’re absent from this World Development Report. More meaningful solutions to the challenges raised would include hiring diverse teams with national staff who understand the context; giving autonomy to “development entrepreneurs” to work adaptively; increasing the range of stakeholders involved in program design and giving them feedback channels throughout implementation; or even reducing the power of experts at large institutions in the sector. These approaches can provide a useful counterbalance to the purely technical knowledge of experts, yet none of these are discussed in the report.
This is where behavioral economics and design can come together. Participatory design methods bring the end-users into the decisions about services and products that will serve them. Working with locally led teams of development practitioners, those end-users can interpret the insights from behavioral economics and adapt them to the context. In fact, in light of the biases among international development experts, those end-users are the best positioned to do this.
A further iteration on this confluence of methods involves using the field experiments popularized in behavioral economics as part of the iterative design process. For simple messaging initiatives at a large scale, such as behavior change communications, this is not so different from the A/B testing long used by marketers. For more complicated products and services, general insights from behavioral economics can influence designs, which are then subjected to in-context experimentation to ensure they are appropriate and meaningful to the users. The findings from these experiments should again be available to those same end-users who helped with design, continuing their role in iterative interpretation and design.
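For simple messaging experiments of the kind described above, the analysis can be as plain as comparing response rates between two variants. The sketch below is illustrative only: the message framings and all the numbers are hypothetical, and the normal-approximation test is one common choice, not the method of any particular project.

```python
# Illustrative A/B test of two SMS variants, e.g. a plain reminder (A)
# vs. a "social proof" framing (B). All figures are hypothetical.
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Normal-approximation z-statistic for the difference of two proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)          # pooled response rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical results: 120/1000 responded to variant A, 156/1000 to variant B
z = two_proportion_z(success_a=120, n_a=1000, success_b=156, n_b=1000)
print(f"z = {z:.2f}")  # |z| > 1.96 would suggest a real difference at the 5% level
```

The important design point is less the statistics than where the results go: as argued above, the findings should flow back to the end-users who helped shape the messages, not just to the implementing organization.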
The World Development Report’s discussion of adaptation is why I say that this report is broader than just behavioral economics. However, the missing components of design and participation mean that the analysis is narrower than it could be.
Even still, the report moves the discussion forward. These ideas align well with the Doing Development Differently conversations and recent work like the Overseas Development Institute’s “Adapting development” report. Smart people at the World Bank and other organizations are carving out ways to do adaptive implementation in spite of institutional constraints, such as procurement or funding rules. Others are finding ways to shift those constraints altogether.
These efforts are pushing the sector toward better thinking and practice. Recognizing the importance of human biases and choices means bringing in a variety of human-centered analytical methods, such as behavioral economics and design, as well as changing our management, funding, and organizational practices to be more adaptive. This will manifest itself in many different ways across the sector. It’s up to all development practitioners to accept the challenge of turning a critical lens on our own work.
While conducting research in rural Nigeria last year, I met Ester, a young mother who told me about her last visit to the local health clinic. After receiving a malaria treatment the government had advertised as free, she was charged the equivalent of her whole week’s pay. I asked what she did about it. “Nothing,” she said. “What would I do? Who would I tell?”
This was not the first time I heard this reason for not reporting feedback on a negative health care experience. Ester’s voice was part of a larger chorus that our research team heard often when speaking with patients in Wamba, a mountainous region in central Nigeria.
To address this communication gap between patients and service providers, Reboot, in partnership with the World Bank, the Nigerian government, and Caktus Group, developed an SMS-based program to collect and act on citizen feedback regarding the quality of primary health care services. The pilot design, known as My Voice, would enable patient experience to directly inform improvements to public healthcare, supplementing Nigeria’s performance-based financing initiative for national healthcare reform.
Interviews with Ester and other Wamba citizens highlighted just how uncommon the practice of formal healthcare feedback was. Even the word “feedback” was unfamiliar. This meant that simply developing a technology platform for patients to communicate with healthcare providers wouldn’t be sufficient. We needed to design a program capable of empowering patients to offer feedback and motivating providers to make tangible improvements based on patient comments.
Projects like My Voice, which engage citizens through text messages to improve public services, are now a trending focus for development donors and practitioners. It’s not hard to see why: it’s new, economical, and potentially inclusive of the world’s 3.5 billion+ mobile subscribers, 78 percent of whom are in developing countries. Connecting the people who use public services directly to the people who provide them, via text message, can improve services while establishing powerful new accountability mechanisms.
But sometimes even the best-intentioned programs don’t yield the results investors and developers expected. Sometimes citizens aren’t willing to participate, or systems aren’t designed to encourage participation. Sometimes the data collected is not relevant or useful to decision makers. Sometimes these platforms are built as siloed operations, failing to integrate with existing real-time data analysis efforts or with national programmatic processes. And all too often, these projects celebrate citizen engagement volume through flashy websites, making scale and replication seem easy.
For these initiatives to work for actors across a service delivery chain for the long haul, citizens need to trust that their inputs will be heard and will make a difference. Decision makers need to trust that feedback has a constructive use and will not threaten their careers or livelihoods. As the program is created, communities and stakeholders need to trust that the design process has listened to them—that the final product will reflect their needs, and will work given all of the contextual realities (like intermittent mobile networks) international consultant teams may overlook.
My Voice introduced an entirely new practice for patients and service providers in rural Nigeria—not only in the technology used to send and receive surveys by SMS, but even more fundamentally in the basic act of providing and using feedback. Keeping that in mind, two core principles drove our approach for this project:
We knew the design of My Voice would only ever be as good as our understanding of its users and their environment. This led us to conduct an immersive research process, including in-depth, ethnographic interviewing, service trials, site observation, and embeds with service providers.
For the six-month duration of the project, our team established a temporary office in Wamba town and hired a local research team to help guide each stage of the process. We visited all of the participating health clinics multiple times each week by motorbike. We split time between the State and Federal capital cities, where we worked alongside the policy makers and program managers responsible for the primary health care system. We dealt with the same intermittent cell coverage and slow Internet access Wamba residents do. And we relied on the nearest health clinic for malaria treatment, and the occasional food poisoning incident.
This immersion gave us a clearer, closer view of health infrastructure and processes, and a deeper understanding of the needs, habits, and interests of the people who would use and operate the feedback system. It also allowed us to participate in key decision-making meetings, surfacing opportunities to include My Voice’s real-time data in program improvement discussions. Our research and design process required close collaboration with national health experts, local-level management, and the World Bank team, who also joined us in Wamba for co-design missions throughout.
Users don’t exist in a vacuum; we made sure to research and design for much more than the individual user. We studied the context that My Voice would function in, the institutions that would manage and use it, and a slew of political, cultural, situational, and environmental elements that would play a role in the project’s sustainability. As a result, we were able to design not only a technological platform, but an entire program for My Voice to integrate within health clinic and government assessment and decision-making processes.
The My Voice program design included training for staff on interpreting and using patient feedback; transition of program management to local government staff; strategies for integrating My Voice into national healthcare programs; and a tailored brand identity, promotional materials, user guides, and messaging campaign.
Over time, we developed relationships with the clinic managers, healthcare workers, local religious leaders, and local government staff. During meals or in their offices, we listened to their inputs and used them to tweak the system, gradually iterating and improving. While initially skeptical, the community ultimately understood that My Voice was for them: built for their interests using their own ideas. For example, they determined what information it would include, when feedback would be received, and what reports would look like. Throughout the process we saw ourselves as idea facilitators rather than generators—weaving together and realizing the ideas of the extended Wamba healthcare community.
In-depth research, living in Wamba, including stakeholders in the shaping of the design, building a program to support the technical design: all of this wasn’t just good design practice. Rather, these intentional investments sent messages to citizens and government alike that we were committed to designing a service that worked for Wamba’s patients, clinic managers, and state-level decision makers. These signals began to establish new lines of trust—a vital step before people would be willing to take part in this new form of communicating and problem-solving. And, while all of that was important for short-term buy-in, we also ensured that My Voice’s open source platform could be used for the long term and could be easily adapted for use in other sectors, a small step towards addressing the fragmented data collection landscape.
All of these micro design decisions began to add up, so that for the first time in Wamba, patients began sharing feedback on their experiences in rural health clinic visits. Patients reported which clinics were closed when they needed emergency care, or when they didn’t understand their diagnosis or cost of treatment. Staff—who at first worried that patient feedback would be harmful—began to welcome and appreciate comments from their patients.
Not only was a new channel for dialogue built through My Voice, but clinics started using the channel to make incremental improvements to health care, based on what patients were saying. And for the first time, service providers responded to patient comments by clarifying their payment processes, checking on staff more frequently, and keeping facilities open during nights and weekends. At Reboot, we believe that this change is pretty significant, and that it is only possible because the design was built with the people using it and in the institutional ecosystem where it would function.
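The article doesn’t describe My Voice’s implementation, and the sketch below is purely hypothetical (the question text, field names, and flow are all invented, not drawn from the actual open source platform). But the basic SMS survey loop it describes can be pictured as a small state machine that tracks each respondent’s place in a short question sequence on the server side:

```python
# A minimal, hypothetical sketch of an SMS feedback survey loop.
# Not the actual My Voice codebase; questions and fields are invented.

QUESTIONS = [
    "Was the clinic open when you arrived? Reply YES or NO.",
    "Did you understand your diagnosis? Reply YES or NO.",
    "Were you told the cost of treatment? Reply YES or NO.",
]

# phone number -> where the respondent is in the survey, plus answers so far
sessions: dict[str, dict] = {}

def handle_sms(phone: str, text: str) -> str:
    """Process one inbound SMS and return the outbound reply."""
    session = sessions.setdefault(phone, {"step": 0, "answers": []})
    # Any message after the first is an answer to the previous question.
    if session["step"] > 0:
        session["answers"].append(text.strip().upper())
    if session["step"] < len(QUESTIONS):
        reply = QUESTIONS[session["step"]]
        session["step"] += 1
        return reply
    return "Thank you! Your feedback helps improve your clinic."
```

Keeping state on the server side matters in a context like Wamba’s: a patient can answer whenever the network allows, and the next question picks up wherever the exchange left off.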
We’re now working with the World Bank and the Nigerian government to see how My Voice can be adapted for more states, applied to other sectors, or further harmonized with existing data platforms. Together, we’re exploring how to scale the trust and sense of ownership we worked so hard with the Wamba community to build.
“Multi-stakeholder approaches,” “participatory development,” and “design with the user” are increasingly popular concepts in global development. As an organization founded on the belief that citizens should have a greater say in the policies that affect their lives, we at Reboot should be heartened by this momentum towards greater collaboration with “users” and “beneficiaries.”
But too often, we’ve seen co-creation done poorly. Many organizations have recognized the importance of collaborating with the diverse stakeholders who will be touched by the policies or products they develop, but rhetoric rarely matches reality. Co-creation is hard. With more voices in the room, the process is slower and more complex; it can seem impractical. And let’s be honest: co-creation decreases the influence of powerful actors in shaping outcomes. The development industry often lacks both incentives and mechanisms to co-create, and as a result, it isn’t often done well.
These concerns were at the top of our minds when we learned that USAID and Sida—the American and Swedish international development agencies—wanted to convene over 60 people from 50 organizations to “co-create” an ambitious new program to support and strengthen civil society around the world.
Globally, advancing social justice and human development often relies on local civil society organizations. Yet the right to meet, organize, and drive change through civic action is facing a backlash. Since 2012, the International Center for Not-for-Profit Law has documented more than 50 countries seeking to ban or constrain civil society activity.
USAID and Sida are two of the founding partners of a new initiative—launched as part of President Obama’s global call to Stand with Civil Society—that aims to combat this growing repression, expand civic space, and strengthen civil society. They plan to do this by developing a network of regional civil society hubs, each tailored to the goals and needs of civil society communities in that region.
In a laudable demonstration of donor humility, USAID and Sida admitted at the outset of our collaboration that they didn’t know the best way to create these hubs. This honesty created the conditions for a true co-creation process. Rather than designing a program from the top down then validating it through consultation, USAID and Sida aspired to work with civil society actors as true partners.
And so they issued a global call for ideas on ways to support and strengthen civil society. Over 200 organizations shared their ideas, and over 40 were invited to a co-creation workshop in November 2014 in Istanbul to set the ethos and foundation of a new initiative. Reboot and CIVICUS, a global alliance of civil society organizations, were asked to join, both as participants and to design and facilitate the workshop.
At its heart, co-creation is about bringing people together to develop solutions to a common challenge. While this sounds straightforward, what makes it tough is that actors almost always have different perspectives on the challenge, different levels of experience addressing it, and different interests and motivations for engaging in the work.
Power imbalances within the group make working through these differences towards constructive solutions all the more difficult. While power and politics naturally play into any group’s dynamic, facilitators must carefully navigate these imbalances to bring forward each individual’s perspective and expertise.
Too often, exercises billed as co-creation fail to live up to their stated values of inclusivity, leaving repeat participants wary of such “consultation as insultation” processes. Understanding this, it was no surprise to hear participants at the start of the Istanbul workshop wondering aloud whether the donors had brought a plan to be rubber-stamped, and if the workshop was simply a political box to be checked.
Crafting effective co-creation is much like designing any program, service, or product. “Just do a workshop” is a short-sighted mistake. A thorough process and strong user experience design are critical. Our team worked closely with USAID, Sida, and CIVICUS before and after the three-day Istanbul event to optimize participant experience and, in doing so, harness their expertise into productive outcomes. Recognizing the unique dynamics of this design exercise, we relied on a set of core principles to guide our work.1
Break out of established roles and mindsets. As with most such gatherings, there were power imbalances within our group, which included both donors and grantees, as well as representatives from the global North and South. To break with traditional hierarchies, we had to force participants out of familiar roles and mindsets.
To do so, we first worked to understand the interests, experiences, and expectations of each co-creator. (This was especially critical given that many civil society actors, while they may be ideological allies, are commercial rivals competing for a limited pool of resources.) What do a human trafficking activist in the Philippines and a freedom of information lawyer in Georgia have in common? On the surface, not a lot. But throughout the process, we asked participants to bring their individual experience to the fore, rather than calling on them as representatives of their organizations. Asked to recognize the respect for human rights that unified us all, participants were able to shed their “organizational hat” (and the associated pressures) and collaborate towards a common vision.
We framed conversations to draw on the experiences of less-privileged voices, and asked the more-powerful actors to be transparent about their interests and resources. Donors, for example, had to answer sometimes uncomfortable questions about organizational politics and funding that might impact the initiative. They were also asked to be highly sensitive to their influence in group settings, and to participate by asking clarifying questions rather than offering opinions that might overly sway the conversation.
Define the “what” and allow creativity around the “how.” Facilitating a co-creation process is about articulating a vision, establishing the parameters, and guiding participants to a shared definition of what success looks like. It’s never about the specifics of execution—that’s up to the co-creators. And while we designed a detailed implementation plan with multiple possible paths, these were used as flexible scaffolding rather than a fixed itinerary.
Around the “what,” we recognized that USAID and Sida had given us an intentionally broad mandate. And so, to focus our thinking and encourage rigor in both thought and action, we unpacked development buzzwords and fuzzwords to understand what each of us meant by terms like “increased impact” and “inclusive participation.” This primed us to be clear about what it was that we sought to achieve.
We asked participants to draw on their own experiences to develop success criteria that were familiar and tangible, rather than based on abstract principles or case studies. The group jointly aligned on a set of key “nuts and bolts” (e.g. service offerings, business model) that designs of the hubs should include. This gave participants categories and boundaries within which to design, while also providing leeway to create locally tailored content.
And we stayed flexible and adapted schedules and exercises as we went along. Because when you give 50-odd very opinionated people a big, hairy task, you need to be ready to seize the opportunities (and address the challenges) that come out of it.
Build an invested community of collaborators. Successful co-creation efforts are the work of a cohesive community, not a collection of individuals. Collaborators must build trust before tackling the technical challenge at hand.
We designed the co-creation process around anticipated human dynamics—such as past relationships or histories that may have caused reservations—seeking to first build unity, then “do the work.” Thoughtfully designed icebreakers, high-energy exercises, and social activities were critical for building community bonds. An open spaces session allowed participants to talk about whatever they wanted, even if it was outside the meeting’s scope. We monitored the human factor throughout and adjusted activities accordingly.
Some of this, understandably, worried the conveners—would the work get done in time? But by late on the second day, when we gave out the final co-creation assignment, participants were rearranging their evening plans and setting 7am breakfast meetings. Most of us wouldn’t do that with colleagues or fellow workshop participants—we only invest in such a deep, personal way when we’re working alongside comrades.
Participants left Istanbul buzzing with levels of energy rarely seen after being crammed in a conference space with too many strangers and too little elbow room.2 Rich conversations continued online in the following months and have now led to the foundation of a truly innovative global initiative to support civil society.
At Reboot, we are proud to see our design and facilitation methods help mitigate conventional power structures, putting authority and ownership in the hands of users—in this case, activists, civil society actors, and their supporters advancing social justice around the world.
USAID, Sida, and several co-conspirators are now planning regional design processes, where this initiative will make more concrete decisions on how to support civil society innovation in each region. We look forward to updating you as it moves forward.
1: To dive further into the process we used to create the Istanbul workshop, see our briefing note Co-Creating the Civil Society Innovation Initiative: Process Journey from Idea to Design (PDF, 518KB)
Brought out in handcuffs, a defendant stands with his public defender before a judge. The prosecutor requests that bail be set at $500. The defendant has a warrant on his record—likely the result of a failure to appear in court—and so the non-profit that provides bail recommendations advises against releasing him. If the judge agrees with the prosecution’s $500 request, the defendant, a day laborer, won’t be able to afford it. He will be sent to Rikers Island to await his trial date, a few days or even weeks away. He will lose his job for missing work. He will not be able to pick up his children from school or watch them in the evening. This man has not been found guilty of any crime, nor has he had a trial in front of his peers. Yet his life will be turned upside down by even the briefest stint in jail.
His alleged crime? Putting his feet up in a subway car.
While much of Reboot’s work takes place overseas, we see too many instances of injustice and abuse in our hometown, and we focus our pro-bono work here in New York City. In the past, we’ve worked with the domestic violence organization Safe Horizon to design communications materials that better reach trafficking victims. Right now, we are deep in a project with a new organization, the Brooklyn Community Bail Fund, aimed at designing an immediate solution to one of the most systemic economic inequalities in our courts.
For this project, we’ve spent several days viewing arraignments in Kings County Courthouse, and have seen many heart-rending stories of people unable to post bail. Studies from Human Rights Watch and others confirm that bail is a common source of inequity and discrimination in the criminal justice system. Most people who can’t afford bail end up pleading guilty, forgoing their right to a trial just to get out of jail and go home. The system needs to change, and many public defenders believe that the cash bail system should be abolished completely.
But on the way to reform, small nonprofit organizations like the Brooklyn Community Bail Fund are stepping in to give more defendants the benefit of release. This fund will post bail for people charged with low-level crimes, allowing them to continue working and caring for their families while awaiting trial. After watching the wheels of Brooklyn’s misdemeanor court turn, we know first-hand how much this program is needed.
Bail often punishes low-income people for crimes they have not been found guilty of committing. A short time in jail can have massive repercussions for anyone, but especially for those living and working without a safety net, at the margins of society. Homeless people often lose shelter housing for themselves and their families if they are unable to show up for an evening. People in low-paying and temporary jobs will often be fired if they fail to appear. Those receiving treatment for drug addiction may face serious health risks if a program is interrupted.
Bail is intended to ensure that people charged with crimes return to have their day in court. But in practice the system does just the opposite, often forcing people to plead guilty. The public defender organization Brooklyn Defender Services recently collected data from a sample of defendants in Brooklyn; in that group, an incredible 92 percent of defendants held in prison pled guilty, as opposed to 40 percent of those released on bail. The desire simply to return home is a powerful incentive to forgo one’s right to a trial.
Even if a defendant perseveres, simply having been unable to afford bail negatively affects the outcome of the case when it does go to trial. In the same group of defendants, only 38 percent of those held on bail received a favorable resolution, compared to 88 percent of defendants who were free leading up to the trial. The bail system clearly works against the basic tenet of innocent until proven guilty.
Inability to post bail, even when the amount is very low, affects thousands of New Yorkers every year. In 2008, defendants were unable to afford bail in 87 percent of cases in which bail was set at $1,000 or less—over 15,000 New Yorkers held in jail for an average of 15 days before trial. The consequences extend to taxpayers as well: According to Human Rights Watch, the average daily cost for each incarcerated inmate is $400. Nationwide, the estimated cost of imprisoning people held on bail reaches $9 billion each year.
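The scale implied by those figures is easy to sanity-check with back-of-the-envelope arithmetic (a rough sketch only; it ignores variation in stay lengths and per-inmate costs):

```python
# Rough estimate of NYC's pretrial detention cost from the figures above.
people_detained = 15_000   # New Yorkers held on bail of $1,000 or less (2008)
avg_days_held = 15         # average pretrial stay
cost_per_day = 400         # Human Rights Watch per-inmate daily cost estimate

nyc_cost = people_detained * avg_days_held * cost_per_day
print(nyc_cost)  # 90000000, i.e. roughly $90 million for low-bail cases alone
```

Ninety million dollars a year for one city’s low-bail cases alone makes the $9 billion nationwide estimate look unsurprising.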
Fortunately, a growing number of bail funds are providing a temporary solution. Although bail funds are still relatively few in number, they exist in several states, including New York, where a 2012 law sanctioned nonprofit bail funds for the first time. The success of these pioneering programs offers strong proof that we need more like them. One inspiration for the Brooklyn Community Bail Fund is the Bronx Freedom Fund, which boasts a 98 percent return rate for bail fund recipients. That’s higher than the return rate for those released without bail.
Reboot has been studying the example of the Bronx Freedom Fund as we work to support the best possible design for Brooklyn’s fund. We are working closely with public defenders involved in the establishment of the fund, as well as the Board of Directors that will oversee it once established. Our mandate is to bring expertise in design research to help build a system that will reach the defendants who will benefit most—and one that administrators can manage sustainably.
Our project has required in-depth field research within the justice system. Observing arraignments in court improves our understanding of the people involved in a court case: What pressures are faced by public defenders, who are on the front lines of serving clients? Interviews with public defenders reveal important opportunities: What can they teach us about making sure defendants show up in court, since they’ve been doing it for years? And interviews with people who have faced bail themselves help us understand how a bail fund can better serve people like them.
Bail is a serious impediment to justice in this country—one of many in a criminal justice system rife with discrimination and flaws. As New Yorkers, we’re hungry for change, from the NYPD to the courts. As designers, we know that reform requires empathy and listening to succeed. As we work with the Brooklyn Community Bail Fund, we’ll continue sharing insights here in support of design for a better justice system.
The international development sector has a history of expensive failures. Top-down planning, marginalizing local actors, transposing cookie-cutter solutions from other contexts, and short-term “band-aid” fixes are all to blame for these projects’ lack of impact.
Against the backdrop of these failures, a number of projects are successfully creating real change. Even within the same systemic constraints, these projects have found ways of doing development differently and creating impact. They are positive deviants within the sector.
In October, Zack and I had a chance to take part in an event focused on those positive deviants. Hosted by the Overseas Development Institute and Harvard Kennedy School, the two-day workshop, “Doing Development Differently,” sought to build common understanding and a community of practice around a new way to engage in international development.
Learning from positive cases was central to the event. Over a dozen practitioners and policy makers presented focused case studies: seven and a half minutes, no slides, just the story. These case studies, complemented by the experience of participants, fed into broader sessions aimed at teasing out common principles.
In the weeks since, the conveners and participants have crafted a manifesto. It states that successful initiatives tend to follow common principles:
They focus on solving local problems that are debated, defined and refined by local people in an ongoing process.
They are legitimised at all levels (political, managerial and social), building ownership and momentum throughout the process to be “locally owned” in reality (not just on paper).
They work through local conveners who mobilise all those with a stake in progress (in both formal and informal coalitions and teams) to tackle common problems and introduce relevant change.
They blend design and implementation through rapid cycles of planning, action, reflection and revision (drawing on local knowledge, feedback and energy) to foster learning from both success and failure.
They manage risks by making “small bets”: pursuing activities with promise and dropping others.
They foster real results – real solutions to real problems that have real impact: they build trust, empower people and promote sustainability.
Reboot was happy to sign on to this manifesto because it ties so closely to how we already work. In fact, Zack presented our work on social accountability in Nigeria at the workshop, and our friend Natalia Adler from UNICEF talked about the project that we did with her team in Nicaragua. The manifesto also draws from a pioneering array of methods, experiences, and principles presented by other participants who shared their work, including the “problem-driven iterative adaptation” (PDIA) approach used by Matt Andrews and others at the Kennedy School’s Building State Capability program.
The community of practice resulting from the workshop is carrying the initiative forward. The event drew an influential group of 40 participants together, and facilitators managed to create more engaging discussions than you typically see at a policy workshop. Several people commented that they had been trying to drive change in their own institutions, and that this group finally made them feel that others were dealing with the same struggles. Oxfam’s Duncan Green called it “two mind-blowing days.” We left feeling invested in the next steps.
However, challenges lie ahead. First, this community of practice needs to grow. We need a larger community to infiltrate more development institutions and change policies as well as mindsets. Even more importantly: We need more diverse perspectives to better articulate these principles and develop a deeper understanding of what they mean in practice. The “Doing Development Differently” workshop skewed heavily Northern, with few voices from the global South in the room. It was also dominated by donors and consultants. If that lack of diversity continues, it will impoverish our ideas and our impact.
The need for diversity relates to another challenge: incorporating power and politics into these principles. The Northern/donor perspective at the workshop led us to frame issues from the standpoint of outsiders promoting or funding reform or direct services. The resulting principles call for more agency and leadership from local conveners. However, I suspect that the needed shift in relationships is more nuanced. True success will involve different types of approaches on the part of local actors; and all of these relationships are tied up in politics and the specific individuals involved. This is a thorny set of issues.
Finally, this community needs to present something truly new, useful, and impactful. One critique raised at the workshop, and readily acknowledged by the conveners, is that all of these ideas have been floating in the development sector for some time. This community of practice may be able to offer something unique if it can solidify and operationalize these principles in various development institutions (to this end, we discussed issues like procurement and human resource policies). But institutionalizing the principles carries a risk of watering them down, as often happens when issues or approaches are “mainstreamed.” Even the logframe, that favorite punching-bag for development reformers everywhere, started as a well-intentioned effort to improve planning.
Despite the challenges, I’m optimistic about these efforts. Later this week, I’ll be in Berlin for a meeting hosted by the World Bank and Germany’s aid agency GIZ that will build on the workshop and the manifesto. The next stage involves creating more robust case studies, beyond the seven-and-a-half-minute presentations from the workshop. We’ll seek to craft a case study methodology that incorporates local perspectives and lessons—and that captures lessons about emerging practices in an actionable way.
The work of changing institutions is hard, but we’re happy to help drive these efforts forward, in theory and in practice. Ultimately, doing development differently means ensuring that the “positive deviants” become the norm.
At Reboot, we take thousands of photos over the course of a project. We take pictures of people and their environments—homes, workplaces, possessions, and the list goes on. Photography has always been an important part of our research and data gathering process. Imagery serves as a critical visual tool, and one that helps foster empathy for those we are working with.
Imagery is also a key component of Reboot’s visual identity, as you may have deduced from this website. Images of people are especially powerful in revealing the details of the kind of work we do, the people and places we learn from, and the principles we stand for. But using someone’s likeness publicly—anywhere—means we need to do so in a respectful and responsible manner.
Anyone who has taken even a passing look at socially oriented organizations, especially in the international development space, knows this is not always the case. “Poverty Porn” abounds in the promotional materials of everything from large NGOs, to small consultancies, to personal work portfolios and photographers’ websites. Big teary eyes, small tattered clothes, images of want and famine that pull on your heartstrings. Oh, it’s in Africa? Even better.
These images capitalize on viewers’ sympathetic or pitying emotional reactions, and organizations use them for a reason. Sympathy and pity are strong emotions that prompt strong responses—all the better for drawing attention to your work, and especially for fundraising. But what do they do for the people featured in the photos? They didn’t ask for our sympathy, and they didn’t ask for our pity.
At Reboot, we work to empower and to enable. A visual world of sympathy and pity doesn’t sit well with us as people, and it certainly doesn’t square with the mission of our organization. So we chose a different approach.
First and foremost, we recognize that as the individuals who document, process, and use images of other people we have responsibilities, namely:
This means we leave a lot of images on the cutting room floor. Of the thousands of photos taken over the course of a project, only a select few are seen by anyone outside the Reboot team. In fact, when it comes to showing the rest of the world the work we do, we are often limited to just a handful of photos to represent the months (if not years) of work that went into each project.
To ensure that we practice what we preach and live up to our responsibilities, we’ve implemented a system for making sure individuals are informed about how their images might be used, and for asking their permission to use their images in those ways. Those permissions or denials are then recorded and tagged in the photo’s metadata. Beyond seeking informed consent, we also defined the ways we should and shouldn’t use certain types of images, especially with regard to the appropriateness of using images of people.
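The article doesn’t say what tooling records these permissions (Reboot tags them in the photo’s metadata). As a simplified, hypothetical stand-in, a small script could keep a consent record alongside each image and check it before any use; the function names and fields below are invented for illustration:

```python
# Hypothetical sketch of a photo-consent workflow. In place of embedded
# metadata, it keeps a JSON "sidecar" file next to each image.
import json
from pathlib import Path

def record_consent(photo_path, subject, allowed_uses):
    """Write a consent record as a JSON sidecar next to the photo.

    `allowed_uses` might be e.g. {"internal_research", "website", "reports"}.
    """
    sidecar = Path(photo_path).with_suffix(".consent.json")
    record = {"subject": subject, "allowed_uses": sorted(allowed_uses)}
    sidecar.write_text(json.dumps(record, indent=2))
    return sidecar

def may_use(photo_path, use):
    """Check whether a given use was consented to; default to no."""
    sidecar = Path(photo_path).with_suffix(".consent.json")
    if not sidecar.exists():
        return False  # no recorded consent means no external use
    record = json.loads(sidecar.read_text())
    return use in record["allowed_uses"]
```

The key design choice is the default: an image with no recorded permission is treated as internal-only, so nothing reaches the website or a report by accident.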
Distinguishing between images used internally for research only and images that could be used externally—on the website, in research reports, or elsewhere—proved helpful in realizing this system. Where a photo of a person draws a direct connection to an individual, place, or context, using that image makes perfect sense. But where a photo of a person is used only to draw a connection to ‘corporate’ Reboot, it doesn’t fit well with our values. We need to be aware that we are essentially facilitating an introduction to these people through their imagery, and we must therefore be more intentional about how we tell their story when we do use images of people in our corporate communications.
Addressing this relies on a combination of written copy, design elements, and photo choice. For example, on promotional postcards given out at events and info sessions, we include a short sentence on the reverse side that briefly describes the person and tells where to find out more about them (e.g., a link to a case study on our website). Doing this creates more context for understanding between the viewer and the individual shown, and it allows us to form stronger connections within our own work and materials.
In cases where the format doesn’t allow us to add more information, such as a business card or other small-format item, we shift to images in which the individuals are more anonymous. In this way, we avoid establishing a false sense of empathy, where the viewer feels a strong connection to a person but can’t learn more about them to truly understand their context.
Most importantly, this system didn’t come out of thin air. Rather, it emerged organically and sustainably as a product of our values. While it adds time and effort to taking, processing, and using our photos, the fact that we as an organization decided to work this way engenders support among the team. Because these beliefs arose from our staff organically, the guiding principles had already been informing our image choices all along, even before they were “official.” By formalizing these ideas, we made ourselves, our clients, and the communities we work with a promise that we are keeping.
We officially put this approach into practice exactly one year ago and are still working backward to bring older materials into accordance. It’s not perfect yet, but we’re on the road. Our hope is that by sharing the lessons from our experience, we can encourage other mission-minded organizations to take a look at their own photo-use policies and ensure that they practice what they preach.
Some of the resources we found helpful along the way:
Two interesting trends have recently been coming together in an exciting way: the push for open data, and the “Learn to Code” movement. Together, they hold great promise for open government, but that promise has yet to be fully realized.
Open data has increasingly become a way for governments to demonstrate their commitment to transparency and accountability. Calls to release government data have been heeded to varying degrees by local, provincial, and national governments in many countries. And releasing open data is, in itself, no small feat: doing so in any capacity is often an immense hurdle, and one for which governments deserve recognition.
But, as anyone who has ever downloaded spreadsheet upon spreadsheet of government data (or pored over printed table upon table in a government publication) can tell you, open data alone does not automatically equate to open government. Open government requires citizens and governments to interact with open data and transform it into something that can drive debate, advocacy, and accountability.
Two weeks ago, I had a chance to see the challenges of converting open data into open government firsthand in Indonesia. The Indonesian government has been pushing to increase the adoption of “e-procurement” nationwide as part of its open government strategy. In this system, companies that want to win government contracts must submit their bids through an online process, which facilitates monitoring. The government then publishes data on the open calls and winning contracts.
But even governments whose processes are largely computerized typically store data in formats that serve their purposes, not the needs of the citizen user. For example, when the government of Indonesia first began releasing data, it could only be downloaded one procurement package at a time. More importantly, the data in its raw format is not immediately meaningful to most citizens.
This is where the Jakarta-based Indonesia Corruption Watch (ICW) saw an opportunity. ICW works with LKPP, the Indonesian procurement agency, to consolidate its e-procurement data on the website opentender.net. Visitors to the site can visualize the data and search for contracts with specific characteristics. That alone would be valuable, but ICW went a step further by developing a tool called Potential Fraud Analysis, which applies a scoring algorithm to procurement contracts in order to identify those with a higher likelihood of fraud. Armed with this data, civil society groups, journalists, and citizens can undertake further (analog) investigation to hold government units accountable for their use of resources.
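ICW’s actual methodology isn’t detailed here, but the general shape of a red-flag scoring approach can be sketched in a few lines of code. Everything below—the indicators, weights, thresholds, and field names—is an illustrative assumption for this post, not ICW’s algorithm:

```python
# Illustrative red-flag scoring for procurement contracts.
# Indicators, weights, and thresholds are hypothetical; this does
# NOT reproduce ICW's actual Potential Fraud Analysis tool.

def fraud_score(contract):
    """Return a 0-100 score; higher means more red flags."""
    score = 0
    # Few bidders suggests weak competition.
    if contract["num_bidders"] <= 1:
        score += 40
    elif contract["num_bidders"] <= 3:
        score += 20
    # Winning bid suspiciously close to the budget ceiling.
    if contract["winning_bid"] / contract["ceiling"] > 0.98:
        score += 30
    # An unusually short tender window limits who can respond.
    if contract["tender_days"] < 7:
        score += 30
    return min(score, 100)

contracts = [
    {"id": "A-01", "num_bidders": 1, "winning_bid": 995,
     "ceiling": 1000, "tender_days": 5},
    {"id": "B-02", "num_bidders": 8, "winning_bid": 720,
     "ceiling": 1000, "tender_days": 30},
]

# Rank contracts by score so investigators can triage the riskiest first.
for c in sorted(contracts, key=fraud_score, reverse=True):
    print(c["id"], fraud_score(c))
```

The point of such a tool is triage, not verdicts: a high score flags a contract for the kind of on-the-ground investigation the article describes, rather than declaring fraud outright.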
The implementation of open data initiatives is often midwifed by civically minded programmers who write the code to display government data online. Efforts like those of ICW’s staff demonstrate the importance of computer programming skills in ensuring data accessibility, but these skills are still uncommon. The languages that underpin the ubiquitous websites and applications that have become a part of everyday life for many people around the globe remain a complete mystery to most of us. Fortunately, there are efforts underway to change that.
This is where I want to make a pitch for another kind of coding that I think is at least as important for citizens around the globe: coding for data analysis. Learning to code websites and applications is an essential skillset for visualizing and publishing open data online. But statistical analysis is what allows us to interrogate, test, and extract meaning from data, and many powerful data analysis applications (including open-source options like R and popular commercial programs like Stata) rely on command lines or formulas.
Learning a few key lines of code in one of these applications (and, more importantly, how to interpret the results they produce) opens the door for anyone—not just academics or researchers—to identify statistical trends and relationships related to the issues faced by their communities. It puts real power and flexibility in the hands of citizens to test the claims they hear from those in power and to back up advocacy with hard facts. Data has the potential to inspire powerful stories, but these stories must be unlocked through analysis.
This week, I’m in Mexico City for Condatos, the Latin America Regional Open Data Conference. The conference is an exciting example of burgeoning efforts to integrate programming, data analysis, and communication of open data. The agenda includes speakers and discussions with policymakers, entrepreneurs, researchers, data scientists, and data journalists, all of whom will be talking about the future of the open data ecosystem in the region. On the day before Condatos, the AbreLatam “unconference” will offer workshops and experiences such as a Data Bootcamp that will bring together 20 journalists, 20 programmers, and 20 designers to learn how to analyze and visualize open data. I’m looking forward to seeing firsthand some of the latest efforts to turn open data into truly open government.
Last time I wrote about Mexico’s Agentes de Innovación program, the teams had only just begun the co-creation process that the program hopes to encourage. Over the past several months, the teams have been hard at work further defining the problems they want to tackle, and beginning to ideate around potential products that they might develop.
Each of the teams was initially assigned an (ambitious) overarching theme taken from the country’s National Digital Strategy, which they then narrowed down to (almost-equally ambitious!) driving questions. During the first months of the program, the teams were asked to apply the human-centered design methodology to their issues by undertaking research on the needs of their target users. The projects have continued to evolve based on this research, as well as ongoing conversations within each of the host agencies and the teams’ considerations of priorities and constraints.
Here’s a rundown on the latest updates from the different teams:
The Universal Health team is working within IMSS, the Mexican Social Security Institute. They are asking, “How, through social innovation, can we bring IMSS services closer to the citizen?” In particular, they are focusing on the experience of maternity care for women at IMSS clinics. Maternity care at IMSS isn’t only for women who expect to deliver at an IMSS clinic: many more attend clinics for prenatal visits, which working women must complete in order to request official maternity-leave benefits. The team hopes to tackle both the administrative process and the care patients receive.
The team focused on Citizen Security is linked with the Ministry of the Interior. They are asking, “How can we involve citizens in the prevention of violence?” This team’s intervention builds on an existing platform, CIC, which allows citizens to anonymously report everything from traffic to criminal activity and has seen great success in Monterrey, Mexico.
The team taking on Governmental Transformation is housed at the Finance Ministry, specifically within the group responsible for performance management of programs that are part of the federal budget. The team is asking, “How can we integrate levels of satisfaction and feedback from citizens on budgeted programs into the evaluation of their performance?” They will develop a way for program beneficiaries to provide feedback on the programs they use.
The Digital Economy team is based at the National Institute for the Entrepreneur. Given this affiliation, the project focuses specifically on the National Entrepreneurs Fund, which has a budget of nearly 9.4 billion pesos (over 710 million USD) for 2014. The team has identified a need to improve entrepreneurs’ experience of the fund’s application and tracking process. They are asking, “How can we create a system for the Entrepreneurs Fund that facilitates and makes transparent the process for Mexican entrepreneurs?”
The Quality Education team, whose internal Agente works in the Educational Television section of the Ministry of Education, asked the question, “How can we rethink distance education based on new technological tools?” The team has decided to focus on the issue of students who are at risk for dropping out of school, and how they might be supported and inspired outside of the classroom.
Each of the teams has undertaken user research in their own way. In parallel, the Reboot team conducted its own round of user research as a benchmark for better understanding the teams’ design processes and decisions. In a wide-ranging (but, at only one week long, unusually short) research sprint, we conducted some 50 interviews across five locations.
We spoke to citizens relaxing in Puebla’s main square about citizen security, expectant mothers in Toluca about their experience of maternal care in primary clinics within the Mexican Social Security healthcare system, high school students in Mexico City about their expectations for the future, and entrepreneurs attending the “Week of the Entrepreneur” event in Mexico City about their experience accessing financial and other support.
Besides some intriguing findings for each of the individual projects, we were also left with some questions that we think are relevant for many in the sector trying to incorporate innovative processes.
When is the appropriate time to introduce technology to an innovation process? Technology’s great potential can make it tempting to start by assuming a technology product or platform. But when the real pain point is systemic or policy-related, technology’s power may lie in facilitating policy or behavior change rather than in being an end in itself.
Must an empathetic process produce an empathetic service? Human-centered design doesn’t always mean “be more human.” Sometimes, optimizing for the user just means something fast and intuitive—a two-click online solution rather than a phone call with a caring but chatty administrator, for example.
What are the limits of a protected innovation environment? Structured public innovation programs, like Agentes, often seek to create a protected space in which to incubate new ideas and approaches. At some point, however, any solution produced through such a process will have to be released into the wild and put to the test. We’re continuing to explore how innovators in the public sector can come to understand the necessary institutional prerequisites for (and the potential threats to) a product’s success, even while it is still being incubated.
Next week, the Agentes teams will present their projects at Condatos, the Latin America Open Data Conference, being held in Mexico City. We’re excited to see what they’ve been designing, and will report back more here. In the meantime, tell us what you think about the questions above.