Core Questions in Technology and Human Rights

Two weeks ago, I had the privilege of attending the 2011 Personal Democracy Forum conference, where I sat in on a panel about technology and human rights. The discussion ranged from safety concerns for human rights activists using social media (see Jillian C. York’s “Safety and Social Networks in the Middle East”) to whether niche human rights technology platforms might be a better option (not necessarily, says Movement.org’s Susannah Vila). The question I’ve been mulling over since, however, is how and when the actions of developers and designers in Silicon Valley or New York City might lead to grave consequences for citizens a world away.

In a globalized world, technology originally designed to solve a particular problem for a subset of people in the US (say, allowing college students to stay in touch and flirt through wall posts, photos, and pokes) very well could spread to another part of the world and be used for a completely different purpose (say, organizing protests, demonstrations, and movements in the Middle East). Given this, how can software developers, technology enthusiasts, and designers better anticipate and design for unexpected use-cases? How can we come together to evaluate the impact of our decisions on the lives of people throughout the world?

One concept that may guide future thinking on this matter is “intersectionality.” A term originally coined by feminist thinkers, intersectionality “is a tool for analysis, advocacy, and policy development that… helps us understand how different sets of identities impact on access to rights and opportunities” (from “Intersectionality: A Tool for Gender and Economic Justice”). The basic premise is that individuals hold not one identity, but instead several intersecting ones. Policies designed to empower all women may end up affecting rich women differently than poor, elderly women differently than young girls, or minority women differently than the general female population. Given this, the Association for Women’s Rights in Development recommends that policy makers and program managers consider a core set of questions when designing gender-friendly projects.

Though conceptually simple, intersectionality is easily overlooked in practice. All too often, “women” become one end-user of technology, “human rights activists” another, and “youth” another. Thinking about users through the lens of intersectionality might have important implications for technology and human rights and, if done more often, might help us better evaluate the consequences of various technologies.

At Reboot, we think through issues of intersectionality when developing and implementing projects. Take, for instance, our recent branchless banking project in Pakistan, where we supported a bank’s efforts to expand access to basic financial services for flood victims and others in need. One goal of that project was to lend depth, nuance, and diversity to the broad category of ‘unbanked’ Pakistanis. We asked questions like: How might this potential mobile banking platform affect Pakistani women differently than men? Would branchless banking place the elderly, the illiterate, and others less adept at learning new technologies at a disadvantage? Could indirect stakeholders, such as the government, use the platform to monitor citizens’ banking behaviors?

These questions informed the design of our project, and their careful consideration ultimately contributed to the success of our recommendations. Yet instead of asking these questions on an ad-hoc basis – company by company, project by project – perhaps it’s time for designers, technologists, and subject-area experts to come together and determine a core set of questions we all need to consider when creating new products and services. Building on Jeff Jarvis’ proposed Hippocratic Oath for the internet, such a set of questions might generate much-needed discussion around this topic.

Of course, no one can predict how a technology may morph over time and space. The point of these core questions would not be to nail down a moving target, but instead to conceptualize how technology might affect different populations in drastically different ways.

These core questions are still in their infancy, but some might include:

  • Which communities around the world might end up using this technology? Can user research help us better understand its potential effects on different groups within those communities?
  • How might this technology affect different users, especially traditionally marginalized or less powerful groups in society? For example, how might a product be used by women differently than men, the rich differently than the poor, the young differently than the old, human rights activists differently than casual users, or state actors differently than civil society actors?
  • What laws or other constraints (political, social, or economic) around the world might limit the potential of this new technology?

We will never be able to predict all of the consequences of future technologies. But the first step in ensuring that technology and human rights thrive together is to start conceptualizing a few core questions – and asking these questions of one another. Until we consider technology’s impact on a diverse range of actors with intersecting identities, we’ll never have products and services that are responsive to the needs of a global citizenry.

Given the collaborative nature of these core questions, we welcome your feedback. What are we missing? What other questions could help inform the design of technology-driven programs and services? We invite you to join this conversation with your thoughts and ideas on a very thorny issue.

Emma Gardner is the Director of Communications and Outreach at Reboot. You can contact her at emma @ thereboot (dot) org or on Twitter at @EmmaBGardner.
