From explanations of why Black Lives Matter is a critical movement to the spin room after the second Presidential Debate, implicit bias was all over the headlines in 2016. The dozens of think pieces explaining implicit bias—attitudes or stereotypes we unconsciously associate with others—and how it affects everyday life got everyone talking about something researchers already know: Bias exists. We grapple with it in our work every day. We know it comes from multiple places, including the researcher, the subject, and the environment. Recognizing biases, and keeping them in check, is an ongoing process. And the stakes are high: Research affects policy, and policy affects people.
Although we cannot eliminate biases—jumping to conclusions is part of normal, healthy mental functioning—we have a duty to acknowledge them and to account for the ways bias colors our interpretations. Neglecting to do so throughout the research process simply extends status-quo thought patterns and actions. Inaction is not a neutral position; it is a decision to leave hegemonic power structures unchecked. But how do we design bias checks into the design research process?
I’m a nerd for the philosophy behind research methods, and even did a master’s degree in Research for International Development. This critical training has given me a base to evaluate methodology—but sometimes my internal conversation bends to the theoretical. When I came to Reboot, I was excited to learn how my colleagues built bias checks throughout the research and design process in practice. I spoke to several team members about how they account for bias in their work, and saw how checking for bias is a continuous part of all Reboot research. Below I’ve shared three broad time points (among many) when we account for bias in our work.
Every project at Reboot starts with background research on the issue we’re working on, context we’re working in, and actors we’ll be working with. This stage is a good time to think about how we will build bias checks into the project as a whole and take note of biases that influenced previous research on the topic. Some researchers argue against doing too much background research, suggesting instead that a beginner’s eye is an advantage because it allows for a completely fresh perspective. However, that “fresh perspective” will still be biased by previous experiences—not to mention the value of learning from what’s been tried and what’s worked or not and why. Background research helps ensure that we think holistically about the research question, build on prior experience, and think critically about our own and others’ biases.
Desk research helps inform which political-economy considerations, institutional and cultural ways of working, and additional structural factors (not captured in regression analysis) may need to be considered in our approach, as well as which methods are best suited for investigation. As we get deeper into field research, we are always ready to assimilate new insights and adjust based on both our desk research and real-time data collection.
For example, during background research for a project on teacher absenteeism in Nigeria, Reboot saw that past interventions on the issue had been overly individualistic and punitive, blaming teachers for being absent without exploring structural or contextual factors. The assumption was that teachers were the problem. This assumption mattered because it shaped how teachers were viewed by their employers and the community: to put it bluntly, as lazy. The actual experience of being a teacher, however, was never interrogated or addressed. Recognizing that bias may have influenced past interventions on teacher performance, we widened the study’s aperture to include larger systemic and structural challenges that affect teacher motivation, as well as the perspectives of teachers, parents, and government officials. Among other things, field research uncovered that teachers felt disempowered by the teacher placement process, an insight useful to education policy-makers interested in better motivating teachers. Background research can be a good place to check bias, including biases that may have inhibited the success of previous interventions.
Checking biases doesn’t stop with the research planning phase. Reboot checks continuously while conducting research to ensure that ongoing data collection and analysis are bias-aware. One way we do this is by hiring local researchers—that is, researchers from the community or context we are investigating—for every project, including projects in New York City, where many of us are based, and projects in other countries where we are nonetheless fluent in the local language. Input from researchers within the community can trigger valuable reflection for the team, particularly when a local researcher’s interpretation of an event or conversation differs from the Rebooter’s. Differing interpretations are often rooted in biases held by one party, and unpacking these biases may lead to a new insight. Yes, local researchers are biased too, because everyone is. But they can help balance the insider and outsider perspectives, interpretations, and viewpoints that are especially thorny to manage in qualitative research.
In our recent work with the Open Government Partnership (OGP) and the Government of Jalisco, Mexico, we worked with local researchers to conduct interviews throughout Guadalajara. During the hiring interview for one of the local researchers, we asked what she thought about the local civil society landscape. She responded that it was practically non-existent. Once we got into arranging and conducting interviews, however, we came across respondents who were actively engaged in an incredibly vibrant civil society community in Jalisco. As we discussed these data points in synthesis, our local researcher acknowledged that her previous assumption about civil society being non-existent was perhaps based on an unconscious bias about what she thought civil society looked like: sleek and well-resourced NGOs and foundations, rather than the grassroots activists in her community. She noted that she had changed her view through the research process, celebrating aspects of her community she had previously taken for granted. Reboot used this insight to begin building stronger inroads and connections between those active in civil society and those unaware of its presence.
Prototypes are another valuable tool for checking bias. Through the prototyping process, we are able to elicit biases or assumptions in ways that aren’t always apparent in a conversational interview format. That’s why Reboot builds prototypes throughout our work, not just during the official “prototype” stage of the design process. In our recent work with OGP in Elgeyo-Marakwet, Kenya, prototyping helped the team learn in a way that added nuance to primary research insights while avoiding locking in on one solution or assumption too early. Insights from iterative, continuous research and prototyping can feed into each other.
In Elgeyo-Marakwet, we worked with the County Communications Department to create a simple prototype: A visual map, made with ordinary markers and paper, of a proposed new process for gathering citizen feedback. The team then brought this visual prototype to other relevant departments—for example, discussing feedback about transportation with the Department of Roads.
Although the prototype was based on user interviews with the Department of Roads, the new visual sparked more constructive conversations. The visuals gave participants a tangible example; as a result, they elicited clear, nuanced feedback from counterparts. Through the prototype, the team discovered that they had made a few assumptions that conflicted with the operating procedures of the Department of Roads. Building a visual process map helped get all the stakeholders on the same page—instead of remaining stuck in their separate biases—early in the research process. The result was a more integrated feedback process between citizens and multiple government departments.
In Thinking, Fast and Slow, Daniel Kahneman says that our brains are constantly trying to put new information in the context of existing thought patterns. He calls this inherent function the “associative machine,” and it’s an important part of our evolutionary learning process. As toddlers, we learn not to touch a hot stove by getting burned. When we see different stoves at our friends’ houses, we don’t have to touch them to learn whether we might burn ourselves.
While it keeps us safe, the associative machine also builds false associations based on something we learned in one context, at one time, or with partial information. This means we are susceptible to bias as part of our normal mental functioning. We will never be free of biases, and we don’t have to feel bad about having them. But we can’t let them run unchecked—driving our worldview, our actions, or our research.
As researchers, we have great power to tell stories through our work, and therefore a great responsibility to the people who share their lives with us. The only way for a researcher to glimpse outside their own worldview is to build bias checks throughout the research cycle: reminders that we are all associative machines. It sounds like learning how to dodge bullets in the Matrix, right? But you’ll be surprised—bias checks are like the red pill. Once you start looking, you’ll see your own bias everywhere—and doing so is critical to waking up, fighting Agent Smith, and changing the world for the better.