Technology can often do more harm than good in humanitarian situations. In an op-ed for The New York Times, Research Lead Mark Latonero argues against surveillance humanitarianism.
“Despite the best intentions, the decision to deploy technology like biometrics is built on a number of unproven assumptions, such as, technology solutions can fix deeply embedded political problems. And that auditing for fraud requires entire populations to be tracked using their personal data. And that experimental technologies will work as planned in a chaotic conflict setting. And last, that the ethics of consent don’t apply for people who are starving.”
report | 04.15.19
Digital Identity in the Migration & Refugee Context analyzes the challenges of continually collecting identity data from migrants & refugees.
In Governing Artificial Intelligence: Upholding Human Rights & Dignity, Mark Latonero shows how human rights can serve as a “North Star” to guide the development and governance of artificial intelligence.
The report draws the connections between AI and human rights; reframes recent AI-related controversies through a human rights lens; and reviews current stakeholder efforts at the intersection of AI and human rights.
points | 06.19.18
Data & Society Research Analyst Melanie Penagos summarizes three blog posts that emerged from Data & Society’s AI & Human Rights Workshop in April 2018.
“Following Data & Society’s AI & Human Rights Workshop in April, several participants continued to reflect on the convening and comment on the key issues that were discussed. The following is a summary of articles written by workshop attendees Bendert Zevenbergen, Elizabeth Eagen, and Aubra Anthony.”
Washington Journal of Law, Technology & Arts | 06.07.18
Data & Society Data & Human Rights Research Lead Mark Latonero and Zachary Gold look at the history of web crawler usage and the legal issues surrounding it.
“This paper discusses the history of web crawlers in courts as well as the uses of such programs by a wide array of actors. It addresses ethical and legal issues surrounding the crawling and scraping of data posted online for uses not intended by the original poster or by the website on which the information is hosted. The article further suggests that stronger rules are necessary to protect the users’ initial expectations about how their data would be used, as well as their privacy.”
points | 05.11.18
On April 26-27, Data & Society hosted a multidisciplinary workshop on AI and Human Rights. In this Points piece, Data & Human Rights Research Lead Mark Latonero and Research Analyst Melanie Penagos summarize discussions from the day.
“Can the international human rights framework effectively inform, shape, and govern AI research, development, and deployment?”
For the book “New Technologies for Human Rights Law and Practice,” Data & Society Researcher Mark Latonero raises privacy concerns when big data analytics are used in a human rights context.
“This chapter argues that the use of big data analytics in human rights work creates inherent risks and tensions around privacy. The techniques that comprise big data collection and analysis can be applied without the knowledge, consent, or understanding of data subjects. Thus, the use of big data analytics to advance or protect human rights risks violating privacy rights and norms and may lead to individual harms. Indeed, data analytics in the human rights monitoring context has the potential to produce the same ethical dilemmas and anxieties as inappropriate state or corporate surveillance. Therefore, its use may be difficult to justify without sufficient safeguards. The chapter concludes with a call to develop guidelines for the use of big data analytics in human rights that can help preserve the integrity of human rights monitoring and advocacy.”
Data & Society and the Harvard Humanitarian Initiative’s “Refugee Connectivity: A Survey of Mobile Phones, Mental Health, and Privacy at a Syrian Refugee Camp in Greece” provides new evidence of the critical role internet connectivity and mobile devices play in the lives and wellbeing of this population. Findings are based on a survey of 135 adults amongst the 750 residents at Ritsona Refugee Camp in Greece.
Social Media + Society | 03.20.18
Data & Human Rights Research Lead Mark Latonero investigates the impact of digitally networked technologies on the safe passage of refugees and migrants.
“…in making their way to safe spaces, refugees rely not only on a physical but increasingly also digital infrastructure of movement. Social media, mobile devices, and similar digitally networked technologies comprise this infrastructure of ‘digital passages’—sociotechnical spaces of flows in which refugees, smugglers, governments, and corporations interact with each other and with new technologies.”
In this essay, D&S Fellow Taeyoon Choi interrogates technology designed for those with disabilities.
“Even with the most advanced technology, disability can not and—sometimes should not—disappear from people. There are disabled people whose relationship with their own bodily functions and psychological capabilities cannot be considered in a linear movement from causation to result, where narratives of technology as cure override the real varieties in people’s needs and conditions and falsely construct binary states—one or the other, abled or disabled—shadowing everything between or outside of those options.”
Social Media + Society | 02.01.18
Data & Society Media Manipulation Lead Joan Donovan investigates the development of InterOccupy, a virtual organization operated by participants in the Occupy Movement.
“InterOccupy took infrastructure building as a political strategy to ensure the movement endured beyond the police raids on the encampments. I conclude that NSMs create virtual organizations when there are routine and insurmountable failures in the communication milieu, where the future of the movement is at stake. My research follows the Occupy Movement ethnographically to understand what happens after the keyword.”
Ford Foundation blog | 05.30.17
D&S affiliate Wilneida Negrón details the role of bots and automation in activism today.
As everyone from advertisers to political adversaries jockey for attention, they are increasingly using automated technologies and processes to raise their own voices or drown out others. In fact, 62 percent of all Internet traffic is made up of programs acting on their own to analyze information, find vulnerabilities, or spread messages. Up to 48 million of Twitter’s 320 million users are bots, or applications that perform automated tasks. Some bots post beautiful art from museum collections, while some spread abuse and misinformation instead. Automation itself isn’t cutting edge, but the prevalence and sophistication of how automated tools interact with users is.
Harvard Business Review | 05.16.17
D&S researcher Mark Latonero provides an overview of the role of large tech companies in refugee crises.
While the 40-page brief is filled with arguments in support of immigration, it hardly speaks about refugees, except to note that those seeking protection should be welcomed. Any multinational company with a diverse workforce would be concerned about limits to international hiring and employee travel. But tech companies should also be concerned about the refugee populations that depend on their digital services for safety and survival.
The Guardian | 02.02.17
D&S affiliate Keith Hiatt, Michael Kleinman, and D&S researcher Mark Latonero think critically about the use of technology as an all-encompassing solution in human rights spaces.
It’s important to acknowledge that, most of the time, the underlying problem human rights organisations are trying to solve isn’t technical. It’s often a bureaucratic, institutional, process or workflow problem, and technology won’t solve it (and might exacerbate it).
New Media & Society | 01.16.17
D&S researcher Monica Bulger, with Patrick Burton, Brian O’Neill, and Elisabeth Staksrud, writes “Where policy and practice collide: Comparing United States, South African and European Union approaches to protecting children online”.
That children have a right to protection when they go online is an internationally well-established principle, upheld in laws that seek to safeguard children from online abuse and exploitation. However, children’s own transgressive behaviour can test the boundaries of this protection regime, creating new dilemmas for lawmakers the world over. This article examines the policy response from both the Global North and South to young people’s online behaviour that may challenge adult conceptions of what is acceptable, within existing legal and policy frameworks. It asks whether the ‘childhood innocence’ implied in much protection discourse is a helpful basis for promoting children’s rights in the digital age. Based on a comparative analysis of the emerging policy trends in Europe, South Africa and the United States, the article assesses the implications for policymakers and child welfare specialists as they attempt to redraw the balance between children’s online safety while supporting their agency as digital citizens.
paper | 11.30.16
D&S researchers Mark Latonero and Monica Bulger, with Bronwyn Wex, Emma Day, Kapil Aryal, Mariya Ali, and Keith Hiatt, completed a thorough study on online child sexual exploitation in South Asia.
This study identified an assumption that a technical fix must exist for problems identified as ‘online’. In the case of online child sexual exploitation, these assumptions are true, but limited. INTERPOL and the International Centre for Missing & Exploited Children (ICMEC) lead efforts to identify and take down CSAM images globally, a technological fix. Yet a finding of this study is that, alongside the international response, there is also a need for a local response to attend to the victims and perpetrators. Local response to online child sexual exploitation relies on the strength of the existing child protection system, locating treatment of abuse incidents, regardless of where they occur, within an existing framework. It additionally recognizes that a single child may be the victim of multiple forms of abuse and may seek treatment from the same facilities.
OpenDemocracy.com | 11.11.16
The Engine Room | 09.28.16
D&S fellow Zara Rahman explores the need for access to information and open data, i.e. the right to know.
Given these growing threats, combined with our increased knowledge of government secrecy and surveillance, and new possibilities through widespread technologies, it feels like we should be focusing more than ever on strengthening our right to information. This means directing funding towards it, supporting the established RTI community, and directing resources towards exercising our right to information when we can.
working paper | 05.25.16
D&S Fellow Mark Latonero produced a working paper presenting research done in collaboration with Sheila Murphy, Patricia Riley, and Prawit Thainiyom at the University of Southern California under USAID’s C-TIP Campus Challenge Research Grant initiative. The research used a public opinion survey to assess how an MTV Exit documentary changed knowledge, attitudes, and behaviors related to trafficking vulnerability among the target population in Indonesia. The research also included a social media analysis to assess how activists in Indonesia frame discussions around human trafficking. The paper was produced under the Democracy Fellows and Grants (DFG) program, which is funded through USAID’s Center of Excellence in Democracy, Human Rights, and Governance (DRG Center) and managed by the Institute for International Education.
The researchers sought to generate data to inform the design of programs to raise awareness about trafficking among vulnerable populations and influence knowledge, attitudes, and practices related to trafficking. This paper focuses on research conducted in Indonesia by a team led by the University of Southern California (USC).
The USC team’s research in Indonesia included two components: a public opinion survey and an analysis of social media, both implemented in 2014. The public opinion survey was administered in Indramayu, West Java, Indonesia, a district with more than 1.77 million people that is a “hot spot” for human trafficking. USC administered the survey twice, with 527 participants; between the first and second wave, 319 of the participants watched an MTV Exit documentary on Indonesians’ experiences with human trafficking. USC conducted the social media assessment from May to July 2014, searching Twitter, Facebook, and YouTube in Indonesia for posts containing one or more of seven key words that would indicate that a post was about human trafficking.
D&S fellow Mark Latonero considers recent attempts by policymakers, big tech companies, and advocates to address the deepening refugee and migrant crisis and, in particular, the educational needs of displaced children through technology and app development projects. He cautions developers and policymakers to consider the risks of failing to understand the unique challenges facing refugee children living without running water, let alone a good mobile network.
The reality is that no learning app or technology will improve education by itself. It’s also questionable whether mobile apps used with minimal adult supervision can improve a refugee child’s well-being. A roundtable at the Brookings Center for Universal Education noted that “children have needs that cannot be addressed where there is little or no human interaction. A teacher is more likely to note psychosocial needs and to support children’s recovery, or to refer children to other services when they are in greater contact with children.” Carleen Maitland, a technology and policy professor who led the Penn State team, found through her experience at Zaatari that in-person interactions with instructors and staff in the camp’s many community centers could provide far greater learning opportunities for young people than sitting alone with a mobile app.
In fact, unleashing ed tech vendors or Western technologists to solve development issues without the appropriate cultural awareness could do more harm than good. Children could come to depend on technologies that are abandoned by developers once the attention and funding have waned. Plus, the business models that sustain apps through advertising, or collecting and selling consumer data, are unethical where refugees are concerned. Ensuring data privacy and security for refugee children using apps should be a top priority for any software developer.
In cases where no in-person education is available, apps can still play a role, particularly for children who feel unsafe to travel outside their shelters or are immobile owing to injuries or disabilities. But if an app is to stand a chance of making a real difference, it needs to arise not out of a tech meet-up in New York City but on a field research trip to a refugee camp, where it will be easier to see how mobile phones are actually accessed and used. Researchers need to ask basic questions about the value of education for refugees: Is the goal to inspire learning on traditional subjects? Empower students with academic credentials or job skills? Assimilate refugees into their host country? Provide a protected space where children can be fed and feel safe? Or combat violent extremism at an early age?
To decide, researchers need to put the specific needs of refugee children first—whether economic, psychosocial, emotional, or physical—and work backward to see whether technology can help, if at all.
Surveillance & Society | 08.16.11
Researchers Alexandra Mateescu and Alex Rosenblat, with D&S Founder danah boyd, published a paper examining police body-worn cameras and their potential to provide avenues for police accountability and to foster improved police-community relations. The authors raise concerns, echoed by civil rights groups, that constant surveillance via body-worn cameras may violate privacy and exacerbate existing police practices that have historically victimized people of color and vulnerable populations. They consider whether one can demand greater accountability without increased surveillance and suggest that “the trajectory laid out by body-worn cameras towards greater surveillance is clear, if not fully realized, while the path towards accountability has not yet been adequately defined, let alone forged.”
The intimacy of body-worn cameras’ presence—which potentially enables the recording of even mundane interpersonal interactions with citizens—can be exploited with the application of technologies like facial recognition; this can exacerbate existing practices that have historically victimized people of color and vulnerable populations. Not only do such technologies increase surveillance, but they also conflate the act of surveilling citizens with the mechanisms by which police conduct is evaluated. Although police accountability is the goal, the camera’s view is pointed outward and away from its wearer, and audio recording captures any sounds within range. As a result, it becomes increasingly difficult to ask whether one can demand greater accountability without increased surveillance at the same time.
Crafting better policies on body-worn camera use has been one of the primary avenues for balancing the right of public access with the need to protect against this technology’s invasive aspects. However, no universal policies or norms have been established, even on simple issues such as whether officers should notify citizens that they are being recorded. What is known is that body-worn cameras present definite and identifiable risks to privacy. By contrast, visions of accountability have remained ill-defined, and the role to be played by body-worn cameras cannot be easily separated from the wider institutional and cultural shifts necessary for enacting lasting reforms in policing. Both the privacy risks and the potential for effecting accountability are contingent upon an ongoing process of negotiation, shaped by beliefs and assumptions rather than empirical evidence.
D&S Advisor Ethan Zuckerman pushes back against a new myth developing around Bitcoin as a ready-made solution to complex humanitarian and international development problems around the globe:
Is Bitcoin really the best way to think about establishing a digital commons for financial transactions? Maybe not. The Bitcoin network requires large amounts of bandwidth to run and uses enormous amounts of power, which makes it challenging for people in the developing world to participate in mining or use the network reliably.
The challenges of financial inclusion in a place like Kenya are diverse, from the cost of sending and receiving money, to the difficulty in doing business across borders, and the concentrated power of Safaricom. Perhaps Bitcoin could spur financial innovation there. But there are no guarantees. Understanding what Bitcoin can do for people in the developing world will first require a better understanding of the people who live there.
D&S Fellow Mark Latonero considers the digital infrastructure for movement of refugees — the social media platforms, mobile apps, online maps, instant messaging, translation websites, wire money transfers, cell phone charging stations, and Wi-Fi hotspots — that is accelerating the massive flow of people from places like Syria, Iraq, and Afghanistan to Greece, Germany, and Norway. He argues that while the tools that underpin this passage provide many benefits, they are also used to exploit refugees and raise serious questions about surveillance.
Refugees are among the world’s most vulnerable people. Studies have shown that undue surveillance towards marginalized populations can drive them off the grid. Both perceived and real fears around data collection may result in refugees seeking unauthorized routes to European destinations. This avoidance strategy can make them invisible to officials and more susceptible to criminal enterprises. Data collection on refugees should balance security and public safety with the need to preserve human dignity and rights. Governments and refugee agencies need to establish trust when collecting data from refugees. Technology companies should acknowledge their platforms are used by refugees and smugglers alike and create better user safety measures. As governments and leaders coordinate a response to the crisis, appropriate safeguards around data and technology need to be put in place to ensure the digital passage is safe and secure.
GeoJournal | 08.01.15
blog post | 06.22.15
In this guest blog post for the Responsible Data Forum, D&S fellow Mark Latonero provides an overview of the issues and tensions addressed by the Data, Human Rights & Human Security primer he coauthored with research analyst Zack Gold.
(The post was subsequently published on MasterCard Center for Inclusive Growth Insights.)
“In today’s global digital ecosystem, mobile phone cameras can document and distribute images of physical violence. Drones and satellites can assess disasters from afar. Big data collected from social media can provide real-time awareness about political protests. Yet practitioners, researchers, and policymakers face unique challenges and opportunities when assessing technological benefit, risk, and harm. How can these technologies be used responsibly to assist those in need, prevent abuse, and protect people from harm?”
Mark Latonero and Zachary Gold address the issues in this primer for technologists, academics, business, governments, NGOs, intergovernmental organizations — anyone interested in the future of human rights and human security in a data-saturated world.
paper | 04.01.15
D&S researcher Monica Bulger, with Sonia Livingstone and Jasmina Byrne, wrote a summary report of a seminar held February 12-14, 2015, at the London School of Economics and Political Science, examining whether and how children’s rights to provision, protection, and participation are being enhanced or undermined in the digital age. Thirty-five international experts met for three days at the LSE to share their collective expertise.
The aim of the meeting was to evaluate current understandings of the risks and opportunities afforded to children worldwide as they gain access to internet enabled technologies, and to explore the feasibility of developing a global research framework to examine these issues further.
paper | 02.13.15
D&S Fellow Mark Latonero and colleagues at USC Annenberg recently released a report on technology and labor trafficking. From USC Annenberg:
Migrant workers who are isolated from technology and social networks are more vulnerable to human trafficking, forced labor, and exploitation. These and other findings are detailed in a powerful new report, Technology and Labor Trafficking in a Network Society, released today by the Center for Communication Leadership & Policy (CCLP) at the University of Southern California’s Annenberg School for Communication & Journalism. This project was made possible by a grant from Humanity United, a U.S.-based foundation dedicated to building peace and advancing human freedom.
The report includes the story of a young woman from the Philippines who was stranded in Malaysia after being misled by a deceptive labor recruiter. Despite having a mobile phone, she did not want to call her family and make them worry. While being transported to an unknown destination by her brokers, she was apprehended by police. Interrogated and imprisoned, she hid her phone and called a friend for help. After a month, the Philippine government finally intervened. As it turned out, the woman’s phone served both to connect her with unscrupulous recruiters and to link her to sources of support.
In this piece, Data & Society fellow Karen Levy criticizes the oversimplifications of technological tools that attempt to “solve” rape. “It’s encouraging to see techies trying to address knotty social issues like sexual violence. But if technology is going to intervene for good, it needs to adopt a more nuanced approach — one that appreciates that not every problem can be treated as a data problem.”
Within some public policy and scholarly accounts, human trafficking is increasingly understood as a technological problem that invites collaborative anti-trafficking solutions. A growing cohort of state, non-governmental, and corporate actors in the United States have come together around the shared contention that technology functions as both a facilitator and disrupting force of trafficking, specifically sex trafficking. Despite increased attention to the trafficking-technology nexus, scant research to date has critically unpacked these shifts or mapped how technology reconfigures anti-trafficking collaborations. In this article, we propose that widespread anxieties and overzealous optimism about technology’s role in facilitating and disrupting trafficking have simultaneously promoted a tri-part anti-trafficking response, one animated by a law and order agenda, operationalized through augmented internet, mobile, and networked surveillance, and maintained through the integration of technology experts and advocates into organized anti-trafficking efforts. We suggest that an examination of technology has purchase for students of gender, sexuality, and neoliberal governmentality in its creation of new methods of surveillance, exclusion, and expertise.