report | 04.03.18

Refugee Connectivity

Mark Latonero, Danielle Poole, and Jos Berens

Data & Society and the Harvard Humanitarian Initiative’s “Refugee Connectivity: A Survey of Mobile Phones, Mental Health, and Privacy at a Syrian Refugee Camp in Greece” provides new evidence of the critical role internet connectivity and mobile devices play in the lives and wellbeing of this population. Findings are based on a survey of 135 adults among the 750 residents at Ritsona Refugee Camp in Greece.


report | 03.06.18

Spectrum of Trust in Data

Claire Fontaine and Kinjal Dave

This new report finds that providing school choice data to parents does not equalize educational opportunity, but rather replicates and perpetuates existing inequalities.

Data & Society Researcher Dr. Claire Fontaine and Research Assistant Kinjal Dave conducted a qualitative, semi-structured, interview-based study with a socio-economically, racially, and geographically diverse group of 30 New York City parents and guardians between May and November 2017. Interviews focused on experiences of school choice and on data and information sources.


report | 02.26.18

Fairness in Precision Medicine

Kadija Ferryman and Mikaela Pitcan

Fairness in Precision Medicine is the first report to deeply examine the potential for biased and discriminatory outcomes in the emerging field of precision medicine: “the effort to collect, integrate and analyze multiple sources of data in order to develop individualized insights about health and disease.”


visualization | 02.26.18

Precision Medicine National Actor Map

Kadija Ferryman, Mikaela Pitcan

The Precision Medicine National Actor Map is the first visualization of the three major national precision medicine projects (All of Us Research Program, My Research Legacy, and Project Baseline) and the network of institutions connected to them as grantees and sub-grantees.

The map was developed for the Fairness in Precision Medicine initiative at Data & Society.


report | 02.26.18

What is Precision Medicine?

Kadija Ferryman and Mikaela Pitcan


paper | 04.02.17

Combatting Police Discrimination in the Age of Big Data

Sharad Goel, Maya Perelman, Ravi Shroff, David Alan Sklansky

Sharad Goel, Maya Perelman, D&S fellow Ravi Shroff, and David Alan Sklansky examine a method that can be used to “reduce the racially disparate impact of pedestrian searches and to increase their effectiveness.” The abstract follows:

The exponential growth of available information about routine police activities offers new opportunities to improve the fairness and effectiveness of police practices. We illustrate the point by showing how a particular kind of calculation made possible by modern, large-scale datasets — determining the likelihood that stopping and frisking a particular pedestrian will result in the discovery of contraband or other evidence of criminal activity — could be used to reduce the racially disparate impact of pedestrian searches and to increase their effectiveness. For tools of this kind to achieve their full potential in improving policing, though, the legal system will need to adapt. One important change would be to understand police tactics such as investigatory stops of pedestrians or motorists as programs, not as isolated occurrences. Beyond that, the judiciary will need to grow more comfortable with statistical proof of discriminatory policing, and the police will need to be more receptive to the assistance that algorithms can provide in reducing bias.
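
The central calculation the abstract describes, estimating from historical stop records the probability that searching a given pedestrian will recover contraband, can be sketched briefly. The following is a minimal illustration, not the authors’ code; the file name and column names are hypothetical.

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical historical stop-and-frisk records; columns are invented.
# Race is deliberately excluded from the features, in keeping with the
# paper's aim of reducing racially disparate impact.
stops = pd.read_csv("stops.csv")
features = pd.get_dummies(stops[["location", "time_of_day", "suspected_offense"]])
labels = stops["contraband_found"]  # 1 if the search recovered contraband

X_train, X_test, y_train, y_test = train_test_split(features, labels, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Predicted hit rate for each held-out stop; stops with very low predicted
# yield could be flagged for review before a search occurs.
hit_prob = model.predict_proba(X_test)[:, 1]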


Columbia Law Review | 03.07.17

The Taking Economy: Uber, Information, and Power

Ryan Calo, Alex Rosenblat

Ryan Calo and D&S researcher Alex Rosenblat analyze what they term the ‘taking economy’ of Uber.

Sharing economy firms such as Uber and Airbnb facilitate trusted transactions between strangers on digital platforms. This creates economic and other value and raises a set of concerns around racial bias, safety, and fairness to competitors and workers that legal scholarship has begun to address. Missing from the literature, however, is a fundamental critique of the sharing economy grounded in asymmetries of information and power. This Article, coauthored by a law professor and a technology ethnographer who studies the ride-hailing community, furnishes such a critique and indicates a path toward a meaningful response.

Commercial firms have long used what they know about consumers to shape their behavior and maximize profits. By virtue of sitting between consumers and providers of services, however, sharing economy firms have a unique capacity to monitor and nudge all participants—including people whose livelihood may depend on the platform. Much activity is hidden away from view, but preliminary evidence suggests that sharing economy firms may already be leveraging their access to information about users and their control over the user experience to mislead, coerce, or otherwise disadvantage sharing economy participants.

This Article argues that consumer protection law, with its longtime emphasis on asymmetries of information and power, is relatively well positioned to address this under-examined aspect of the sharing economy. But the regulatory response to date seems outdated and superficial. To be effective, legal interventions must (1) reflect a deeper understanding of the acts and practices of digital platforms and (2) interrupt the incentives of sharing economy firms to abuse their position.


D&S lawyer-in-residence Rebecca Wexler provides an analysis of trade secrecy in the criminal justice system. The abstract follows:

From policing to evidence to parole, data-driven algorithmic systems and other automated software programs are being adopted throughout the criminal justice system. The developers of these technologies often claim that the details about how the programs work are trade secrets and, as a result, cannot be disclosed in criminal cases. This Article turns to evidence law to examine the conflict between transparency and trade secrecy in the criminal justice system. It is the first comprehensive account of trade secret evidence in criminal cases. I argue that recognizing a trade secrets evidentiary privilege in criminal proceedings is harmful, ahistorical, and unnecessary. Withholding information from the accused because it is a trade secret mischaracterizes due process as a business competition.


paper | 11.30.16

Victims Are Not Virtual: Situation assessment of online child sexual exploitation in South Asia

Mark Latonero, Monica Bulger, Bronwyn Wex, Emma Day, Kapil Aryal, Mariya Ali, Keith Hiatt

D&S researchers Mark Latonero and Monica Bulger, with Bronwyn Wex, Emma Day, Kapil Aryal, Mariya Ali, and Keith Hiatt, completed a thorough study of online child sexual exploitation in South Asia.

The study identified a common assumption that problems labeled ‘online’ must have technical fixes. In the case of online child sexual exploitation, this assumption holds, but only in part. INTERPOL and the International Centre for Missing & Exploited Children (ICMEC) lead efforts to identify and take down CSAM images globally, a technological fix. Yet the study finds that this international response must be combined with a local response that attends to victims and perpetrators. The local response relies on the strength of the existing child protection system, which can treat incidents of abuse within an established framework regardless of where they occur, and which recognizes that a single child may be the victim of multiple forms of abuse and may seek treatment from the same facilities.


paper | 10.19.16

Discriminating Tastes: Customer Ratings as Vehicles for Bias

Alex Rosenblat, Karen Levy, Solon Barocas, Tim Hwang

D&S researchers Alex Rosenblat and Tim Hwang and D&S affiliates Solon Barocas and Karen Levy examine how bias may creep into evaluations of Uber drivers through consumer-sourced rating systems:

“Through the rating system, consumers can directly assert their preferences and their biases in ways that companies are prohibited from doing on their behalf. The fact that customers may be racist, for example, does not license a company to consciously or even implicitly consider race in its hiring decisions. The problem here is that Uber can cater to racists, for example, without ever having to consider race, and so never engage in behavior that amounts to disparate treatment. In effect, companies may be able to perpetuate bias without being liable for it.”


D&S researchers Alex Rosenblat and Tim Hwang explore “the significant role of worker motivations and regional political environments on the social and economic outcomes of automation” in this new paper.

Preliminary observations of rideshare drivers and their changing working conditions reveal the significant role of worker motivations and regional political environments on the social and economic outcomes of automation. Technology’s capacity for social change is always combined with non-technological structures of power—legislation, economics, and cultural norms.


paper | 10.11.16

Automatically Processing Tweets from Gang-Involved Youth: Towards Detecting Loss and Aggression

Terra Blevins, Robert Kwiatkowski, Jamie C. Macbeth, Kathleen Mckeown, Desmond Patton, Owen Rambow

D&S affiliate Desmond Patton, with Terra Blevins, Robert Kwiatkowski, Jamie C. Macbeth, Kathleen Mckeown, and Owen Rambow, wrote this paper exploring a body of texts from a female gang member and examining patterns of speech that indicate an aggression trigger.

Violence is a serious problem for cities like Chicago and has been exacerbated by the use of social media by gang-involved youths for taunting rival gangs. We present a corpus of tweets from a young and powerful female gang member and her communicators, which we have annotated with discourse intention, using a deep read to understand how and what triggered conversations to escalate into aggression. We use this corpus to develop a part-of-speech tagger and phrase table for the variant of English that is used, as well as a classifier for identifying tweets that express grieving and aggression.
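
As a rough illustration of the classification task the abstract names, a minimal sketch follows. It is not the authors’ system: the example tweets and labels are invented placeholders, and the custom part-of-speech tagger and phrase table the paper builds are omitted.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented placeholder texts standing in for the annotated corpus.
tweets = [
    "miss you every day rest easy",   # grieving a loss
    "cant believe you are gone bro",  # grieving a loss
    "you better watch your back",     # aggression
    "come say that to my face",       # aggression
    "headed to practice right now",   # neither
    "new kicks just came in",         # neither
]
labels = ["loss", "loss", "aggression", "aggression", "other", "other"]

# TF-IDF over word unigrams and bigrams feeding a linear classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(tweets, labels)
print(clf.predict(["rest easy little bro"]))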


paper | 10.02.16

Exploring or Exploiting? Social and Ethical Implications of Autonomous Experimentation in AI

Sarah Bird, Solon Barocas, Kate Crawford, Fernando Diaz, Hanna Wallach

Sarah Bird, Fernando Diaz, and Hanna Wallach, with D&S affiliates Solon Barocas and Kate Crawford, wrote this analysis of the social and ethical implications of autonomous experimentation in AI.

In the field of computer science, large-scale experimentation on users is not new. However, driven by advances in artificial intelligence, novel autonomous systems for experimentation are emerging that raise complex, unanswered questions for the field. Some of these questions are computational, while others relate to the social and ethical implications of these systems. We see these normative questions as urgent because they pertain to critical infrastructure upon which large populations depend, such as transportation and healthcare. Although experimentation on widely used online platforms like Facebook has stoked controversy in recent years, the unique risks posed by autonomous experimentation have not received sufficient attention, even though such techniques are being trialled on a massive scale. In this paper, we identify several questions about the social and ethical implications of autonomous experimentation systems. These questions concern the design of such systems, their effects on users, and their resistance to some common mitigations.



The Engine Room | 09.19.16

Responsible Data in Agriculture

Lindsay Ferris, Zara Rahman

D&S fellow Zara Rahman and Lindsay Ferris wrote an analysis of how data is used in agriculture, concluding with guidance on how to use that data responsibly.

The responsibility for addressing this does not lie solely with the smaller players in the sector, though. Practising responsible data approaches should be a key concern and policy of the larger actors, from Ministries of Agriculture to companies gathering and dealing with large amounts of data on the sector. Developing policies to proactively identify and address these issues will be an important step to making sure data-driven insights can benefit everyone in the sector.


paper | 09.16.16

The Wisdom of the Captured

Alex Rosenblat, Tim Hwang

D&S researchers Alex Rosenblat and Tim Hwang analyze how the broad capture of user data, which enables technologies to make intelligent decisions, may negatively impact the users themselves.

More broadly, how might the power dynamics of user and platform interact with the marketing surrounding these technologies to produce outcomes which are perceived as deceptive or unfair? This provocation paper assembles a set of questions on the capacity for machine learning practices to create undisclosed violations of the expectations of users (expectations often created by the platform itself) when applied to public-facing network services. It draws on examples from consumer-facing services, namely GPS navigation services like Google Maps or Waze, and on the experiences of Uber drivers, in an employment context, to explore user assumptions about personalization in crowd-sourced, networked services.


D&S researcher Monica Bulger analyzes how ‘power learners’ utilize and contribute to online learning communities with researchers Cristobal Cabo, Jonathan Bright, and Ryan den Rooljen.

We propose two fundamental questions related to the role of these users in learning communities, whom we dub “power learners”. 1) Are power learners naturally committed to the learning community, or do they decide to become more involved progressively? Power users typically show high levels of activity from the beginning of their participation [13]; though studies have also suggested that power users can also learn as they develop [14]. Does the same hold true for online education? 2) Are power learners crucial for starting and maintaining online education communities? In other contexts, such as Wikipedia editing, active early involvement of a small mass of committed participants is crucial for starting collective activities [8]. Is this also the case when it comes to online education?


D&S researcher Robyn Caplan co-wrote a paper analyzing how many large, well-known companies, such as BuzzFeed and Facebook, argue against being categorized as media companies. Caplan and co-author Philip M. Napoli assert that this argument has led to a misclassification of these companies, and that such misclassification has profound policy implications.

A common position amongst online content providers/aggregators is their resistance to being characterized as media companies. Companies such as Google, Facebook, BuzzFeed, and Twitter have argued that it is inaccurate to think of them as media companies. Rather, they argue that they should be thought of as technology companies. The logic of this position, and its implications for communications policy, have yet to be thoroughly analyzed. However, such an analysis is increasingly necessary as the dynamics of news and information production, dissemination, and consumption continue to evolve. This paper will explore and critique the logic and motivations behind the position that these content providers/aggregators are technology companies rather than media companies, as well as the communications policy implications associated with accepting or rejecting this position.


paper | 05.23.16

Perspectives on Big Data, Ethics, and Society

Jacob Metcalf, Emily F. Keller, danah boyd

The Council for Big Data, Ethics, and Society has released a comprehensive white paper consolidating conversations and ideas from two years of meetings and discussions:

Today’s release marks a major milestone for the Council, which began in 2014 with support from the National Science Foundation and the goal of providing critical social and cultural perspectives on “big data” research initiatives. The work of the Council consistently surfaced conflicts between big data research methods and existing norms. Should big data methods be exempted from those norms? Pushed into them? Are entirely new paradigms needed? The white paper provides recommendations in the areas of policy, pedagogy, and network building, as well as identifying crucial areas for further research. From the Executive Summary:

The Council’s findings, outputs, and recommendations—including those described in this white paper as well as those in earlier reports—address concrete manifestations of these disjunctions between big data research methods and existing research ethics paradigms. We have identified policy changes that would encourage greater engagement and reflection on ethics topics. We have indicated a number of pedagogical needs for data science instructors, and endeavored to fulfill some of them. We have also explored cultural and institutional barriers to collaboration between ethicists, social scientists, and data scientists in academia and industry around ethics challenges. Overall, our recommendations are geared toward those who are invested in a future for data science, big data analytics, and artificial intelligence guided by ethical considerations along with technical merit.


D&S Affiliate Elana Zeide’s Future of Privacy Forum paper identifies 19 studies where data was successfully used to evaluate a program, create a new strategy, or delve into equity and bias issues.

“Properly used, mindfully implemented, and with appropriate privacy protections, student data is a tremendous resource to help schools fulfill the great promise of providing quality education for all.”


In this Working Paper from We Robot 2016, D&S Researcher Madeleine Elish employs the concept of “moral crumple zones” within human-machine systems as a lens through which to think about the limitations of current frameworks for accountability in human-machine or robot systems.

Abstract:

A prevailing rhetoric in human-robot interaction is that automated systems will help humans do their jobs better. Robots will not replace humans, but rather work alongside and supplement human work. Even when most of a system will be automated, the concept of keeping a “human in the loop” assures that human judgment will always be able to trump automation. This rhetoric emphasizes fluid cooperation and shared control. In practice, the dynamics of shared control between human and robot are more complicated, especially with respect to issues of accountability.

As control has become distributed across multiple actors, our social and legal conceptions of responsibility remain generally about an individual. If there’s an accident, we intuitively — and our laws, in practice — want someone to take the blame. The result of this ambiguity is that humans may emerge as “moral crumple zones.” Just as the crumple zone in a car is designed to absorb the force of impact in a crash, the human in a robotic system may become simply a component — accidentally or intentionally — that is intended to bear the brunt of the moral and legal penalties when the overall system fails.

This paper employs the concept of “moral crumple zones” within human-machine systems as a lens through which to think about the limitations of current frameworks for accountability in human-machine or robot systems. The paper examines two historical cases of “moral crumple zones” in the fields of aviation and nuclear energy and articulates the dimensions of distributed control at stake while also mapping the degree to which this control of and responsibility for an action are proportionate. The argument suggests that an analysis of the dimensions of accountability in automated and robotic systems must contend with how and why accountability may be misapplied and how structural conditions enable this misunderstanding. How do non-human actors in a system effectively deflect accountability onto other human actors? And how might future models of robotic accountability require this deflection to be controlled? At stake is the potential ultimately to protect against new forms of consumer and worker harm.

This paper presents the concept of the “moral crumple zone” as both a challenge to and an opportunity for the design and regulation of human-robot systems. By articulating mismatches between control and responsibility, we argue for an updated framework of accountability in human-robot systems, one that can contend with the complicated dimensions of cooperation between human and robot.



paper | 03.10.16

Limitless Worker Surveillance

Ifeoma Ajunwa, Kate Crawford, Jason Schultz

D&S Affiliates Ifeoma Ajunwa, Kate Crawford, and Jason Schultz examine the effectiveness of the law as a check on worker surveillance, given recent technological innovations. This law review article focuses on popular trends in worker tracking – productivity apps and worker wellness programs – to argue that current legal constraints are insufficient and may leave American workers at the mercy of 24/7 employer monitoring. They also propose a new comprehensive framework for worker privacy protections that should withstand current and future trends.

Abstract:

From the Pinkerton private detectives of the 1850s, to the closed-circuit cameras and email monitoring of the 1990s, to contemporary apps that quantify the productivity of workers, American employers have increasingly sought to track the activities of their employees. Along with economic and technological limits, the law has always been presumed as a constraint on these surveillance activities. Recently, technological advancements in several fields – data analytics, communications capture, mobile device design, DNA testing, and biometrics – have dramatically expanded capacities for worker surveillance both on and off the job. At the same time, the cost of many forms of surveillance has dropped significantly, while new technologies make the surveillance of workers even more convenient and accessible. This leaves the law as the last meaningful avenue to delineate boundaries for worker surveillance.



paper | 03.10.16

Hiring by Algorithm

Ifeoma Ajunwa, Sorelle Friedler, Carlos E Scheidegger, Suresh Venkatasubramanian

D&S Fellow Sorelle Friedler and D&S Affiliate Ifeoma Ajunwa argue in this essay that well-settled legal doctrines prohibiting discrimination against job applicants on the basis of sex or race dictate an examination of how algorithms are employed in the hiring process, with the specific goals of: 1) predicting whether such algorithmic decision-making could generate decisions having a disparate impact on protected classes; and 2) repairing input data in such a way as to prevent disparate impact from algorithmic decision-making.


Abstract:

Major advances in machine learning have encouraged corporations to rely on Big Data and algorithmic decision making with the presumption that such decisions are efficient and impartial. In this Essay, we show that protected information that is encoded in seemingly facially neutral data could be predicted with high accuracy by algorithms and employed in the decision-making process, thus resulting in a disparate impact on protected classes. We then demonstrate how it is possible to repair the data so that any algorithm trained on that data would make non-discriminatory decisions. Since this data modification is done before decisions are applied to any individuals, this process can be applied without requiring the reversal of decisions. We make the legal argument that such data modifications should be mandated as an anti-discriminatory measure. And akin to Professor Ayres’ and Professor Gerarda’s Fair Employment Mark, such data repair that is preventative of disparate impact would be certifiable by teams of lawyers working in tandem with software engineers and data scientists. Finally, we anticipate the business necessity defense that such data modifications could degrade the accuracy of algorithmic decision-making. While we find evidence for this trade-off, we also found that on one data set it was possible to modify the data so that despite previous decisions having had a disparate impact under the four-fifths standard, any subsequent decision-making algorithm was necessarily non-discriminatory while retaining essentially the same accuracy. Such an algorithmic “repair” could be used to refute a business necessity defense by showing that algorithms trained on modified data can still make decisions consistent with their previous outcomes.
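
For reference, the four-fifths standard the essay invokes is a simple ratio test: a selection process shows prima facie disparate impact when one group’s selection rate falls below 80% of the highest group’s rate. A minimal sketch, not the authors’ code, with invented example data:

import pandas as pd

def four_fifths_check(df, group_col, selected_col):
    # Selection rate per group; pass only if the lowest rate is at least
    # 80% of the highest rate.
    rates = df.groupby(group_col)[selected_col].mean()
    return (rates.min() / rates.max()) >= 0.8

# Invented toy hiring data for illustration only.
hires = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "hired": [1, 1, 0, 1, 0, 0],
})
print(four_fifths_check(hires, "group", "hired"))  # False: 1/3 < 0.8 * 2/3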


paper | 02.23.16

Auditing Black-box Models by Obscuring Features

Philip Adler, Casey Falk, Sorelle A. Friedler, Gabriel Rybeck, Carlos Scheidegger, Brandon Smith, Suresh Venkatasubramanian

The ubiquity and power of machine learning models, which determine and control an increasing number of real-world decisions, present a challenge. D&S fellow Sorelle Friedler and a team of researchers have developed a technique for black-box auditing of machine-learning classification models, to gain a deeper understanding of these complex and opaque models’ behavior.

Abstract: Data-trained predictive models are widely used to assist in decision making. But they are used as black boxes that output a prediction or score. It is therefore hard to acquire a deeper understanding of model behavior, and in particular how different attributes influence the model prediction. This is very important when trying to interpret the behavior of complex models, or ensure that certain problematic attributes (like race or gender) are not unduly influencing decisions. In this paper, we present a technique for auditing black-box models: we can study the extent to which existing models take advantage of particular features in the dataset without knowing how the models work. We show how a class of techniques originally developed for the detection and repair of disparate impact in classification models can be used to study the sensitivity of any model with respect to any feature subsets. Our approach does not require the black-box model to be retrained. This is important if (for example) the model is only accessible via an API, and contrasts our work with other methods that investigate feature influence like feature selection. We present experimental evidence for the effectiveness of our procedure using a variety of publicly available datasets and models. We also validate our procedure using techniques from interpretable learning and feature selection.
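
To make the idea concrete, the following sketch probes a trained model’s reliance on each feature by obscuring that feature, with no retraining. Note that this uses a simple permutation-style obscuring for illustration; the paper’s method is more sophisticated, also removing a feature’s indirect influence through correlated features.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stand-in for an opaque model; in the audit setting we would only have
# query access (e.g., via an API), not the training procedure.
black_box = GradientBoostingClassifier().fit(X_tr, y_tr)
baseline = black_box.score(X_te, y_te)

rng = np.random.default_rng(0)
for j in range(X_te.shape[1]):
    X_obscured = X_te.copy()
    rng.shuffle(X_obscured[:, j])  # destroy the information in feature j
    drop = baseline - black_box.score(X_obscured, y_te)
    print(f"feature {j}: accuracy drop {drop:.3f}")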


In this report D&S advisor Nick Grossman and Elizabeth Woyke examine the “gig economy” and the ways it is reshaping the modern workforce:

It’s difficult to ignore the effects of the “great unbundling” today. The digital revolution has already changed the nature of media, personal health, finance, and other economic and industrial sectors in recent years. As this O’Reilly report reveals, the modern workforce—including the very notion of a “job” itself—is undergoing a similar transformation.

Unbundling is the breaking up of traditional packages of goods and services into their component parts, eventually to be rebundled in new ways. In the same fashion, various job components—income, structure, social connections, meaning, and (in the US) access to healthcare—are being unbundled as well.



Arvind Narayanan is a computer scientist at Princeton. He advised a Master’s thesis, described in Section 2, that utilized a methodology similar to the Encore project’s. Bendert Zevenbergen is a Ph.D. candidate and researcher at the Oxford Internet Institute, where he studies the intersection of law, ethics, social science, and the Internet. Along with a colleague at OII, he first brought certain ethical concerns to the Encore authors’ attention, resulting in a significant change to the design.

This case study, written for the Council on Big Data, Ethics, and Society, is the result of a dialogue between Arvind Narayanan and Bendert Zevenbergen. It examines tricky ethical questions that arise when researchers co-opt Internet-connected devices as vantage points for data collection, without the knowledge or consent of the users of those devices.


paper | 08.13.15

The Digital CultureSHIFT: From Scale to Power

Center for Media Justice, ColorofChange.org, Data & Society

How the Internet is Shaping Social Change, and Social Change is Shaping the Internet

Summary
As activism for police accountability, fair wages, just immigration, and more takes center stage, social justice movements of the 21st century are using technology to achieve greater scale and reach wider audiences. But are these digital strategies building power for long-term social change, or helping maintain the status quo?

A new report from the Center for Media Justice says the answer depends on the strategy — and offers new approaches and recommendations, from a diverse cross-section of leaders, for building effective social movements in an age of big data and digital technology.

Key Takeaways
The strategies and approaches in the Digital CultureSHIFT report provide a path forward for addressing the way social movements integrate new approaches, or remain stuck in a cycle that limits our effectiveness.

What We Learned

  • 100% of those interviewed said that digital strategies and platforms provide a voice when mainstream media ignores issues.
  • The vast majority of leaders interviewed widely use digital platforms to catalyze action, but say over-reliance on these tools can limit relationship-building.
  • The Internet is helping to shift national organizations from centralized to decentralized, from geographically specific to geographically diverse, and from hierarchical leadership to multi-level leadership.
  • Targeted surveillance is a top concern — but the vast majority of leaders of color interviewed felt that advocacy for digital privacy did not include their voices or their visions for change.

paper | 07.07.15

Making Sense of the New Urban Science

Anthony Townsend, Alissa Chisholm

We are living in the age of cities. It is an urgent time, and an uncertain one. Never before have human beings built so much with such haste. Yet we understand so little about how our urban world grows and, sometimes, declines.

To meet this challenge, the world’s universities have set out to plug this knowledge gap, and establish a new science of cities. This report is an initial attempt to understand the collective scope and impact of this movement. What does this new science seek to achieve? Who are its practitioners? What questions are they pursuing? What methods do they use? What are they learning? How might their discoveries shape our shared urban destiny?


paper | 04.01.15

Researching children’s rights globally in the digital age

Sonia Livingstone, Jasmina Byrne, Monica Bulger

D&S researcher Monica Bulger, with Sonia Livingstone and Jasmina Byrne, wrote a summary report of a seminar held February 12-14, 2015, at the London School of Economics and Political Science, which examined whether and how children’s rights to provision, protection, and participation are being enhanced or undermined in the digital age. Thirty-five international experts met for three days at the LSE to share their collective expertise.

The aim of the meeting was to evaluate current understandings of the risks and opportunities afforded to children worldwide as they gain access to internet-enabled technologies, and to explore the feasibility of developing a global research framework to examine these issues further.


Abstract: What will happen to current regimes of liability when driverless cars become commercially available? What happens when there is no human actor—only a computational agent—responsible for an accident? This white paper addresses these questions by examining the historical emergence and response to autopilot and cruise control. Through an examination of technical, social and legal histories, we observe a counter-intuitive focus on human responsibility even while human action is increasingly replaced by automation. We argue that a potential legal crisis with respect to driverless cars and other autonomous vehicles is unlikely. Despite this, we propose that the debate around liability and autonomous systems be reframed to more precisely reflect the agentive role of designers and engineers and the new and unique kinds of human action attendant to autonomous systems. The advent of commercially available autonomous vehicles, like the driverless car, presents an opportunity to reconfigure regimes of liability that reflect realities of informational asymmetry between designers and consumers. Our paper concludes by offering a set of policy principles to guide future legislation.

“Praise the Machine! Punish the Human! The Contradictory History of Accountability in Automated Aviation” is the first paper in the Intelligence and Autonomy project’s series of Comparative Studies in Intelligent Systems. Intelligence and Autonomy is supported by the John D. and Catherine T. MacArthur Foundation.

(Image from: International Telephone and Telegraph Corporation. May 1957. Advertisement. Broadcasting · Telecasting, 139.)


paper | 02.13.15

Technology and Labor Trafficking in a Network Society

Mark Latonero, Bronwyn Wex, Meredith Dank

D&S Fellow Mark Latonero and colleagues at USC Annenberg recently released a report on technology and labor trafficking. From USC Annenberg:

Migrant workers who are isolated from technology and social networks are more vulnerable to human trafficking, forced labor, and exploitation. These and other findings are detailed in a powerful new report, Technology and Labor Trafficking in a Network Society, released today by the Center for Communication Leadership & Policy (CCLP) at the University of Southern California’s Annenberg School for Communication & Journalism. This project was made possible by a grant from Humanity United, a U.S.-based foundation dedicated to building peace and advancing human freedom.

The report includes the story of a young woman from the Philippines who was stranded in Malaysia after being misled by a deceptive labor recruiter. Despite having a mobile phone, she did not want to call her family and make them worry. While being transported to an unknown destination by her brokers, she was apprehended by police. Interrogated and imprisoned, she hid her phone and called a friend for help. After a month, the Philippine government finally intervened. As it turned out, the woman’s phone served both to connect her with unscrupulous recruiters and to link her to support.


D&S researcher Monica Bulger, with Sonia Livingstone, wrote a response to the ten-year review of the WSIS implementation.

There is much room for improvement to achieve the ‘people-centred, inclusive and development-oriented Information Society’ for children. Current protection and provision for children are fragmented and unevenly implemented, even in developed countries, and largely non-existent when considered globally.


paper | 09.21.11

Six Provocations for Big Data

danah boyd, Kate Crawford

This essay offers a multi-disciplinary social analysis of the “Big Data” phenomenon with the goal of sparking a conversation, and it continues to provide a point of reference for the launch and development of Data & Society.

Abstract: The era of “Big Data” has begun. Computer scientists, physicists, economists, mathematicians, political scientists, bio-informaticists, sociologists, and many others are clamoring for access to the massive quantities of information produced by and about people, things, and their interactions. Diverse groups argue about the potential benefits and costs of analyzing information from Twitter, Google, Verizon, 23andMe, Facebook, Wikipedia, and every space where large groups of people leave digital traces and deposit data. Significant questions emerge. Will large-scale analysis of DNA help cure diseases? Or will it usher in a new wave of medical inequality? Will data analytics help make people’s access to information more efficient and effective? Or will it be used to track protesters in the streets of major cities? Will it transform how we study human communication and culture, or narrow the palette of research options and alter what ‘research’ means? Some or all of the above?

