Technology can often do more harm than good in humanitarian situations. In an op-ed for The New York Times, Research Lead Mark Latonero argues against surveillance humanitarianism.
“Despite the best intentions, the decision to deploy technology like biometrics is built on a number of unproven assumptions, such as, technology solutions can fix deeply embedded political problems. And that auditing for fraud requires entire populations to be tracked using their personal data. And that experimental technologies will work as planned in a chaotic conflict setting. And last, that the ethics of consent don’t apply for people who are starving.”
Research Analyst Kinjal Dave urges us to move past the individual framing of “bias” to critically examine broader socio-technical systems.
“When we stop overusing the word ‘bias,’ we can begin to use language that has been designed to theorize at the level of structural oppression.”
paper | 05.22.19
In their new paper Advancing Racial Literacy in Tech, Data & Society 2018-19 Fellows Jessie Daniels and Mutale Nkonde and 2017-18 Fellow Darakhshan Mir urge tech companies to adopt racial literacy practices in order to break out of old patterns.
Conceived and launched under Data & Society’s fellowship program, this paper moves past conversations of implicit bias to think about racism in tech at a systems-level. The authors offer strategies grounded in intellectual understanding, emotional intelligence, and a commitment to take action.
“The real goal of building capacity for racial literacy in tech is to imagine a different world, one where we can break free from old patterns. This will take real leadership to take this criticism seriously and a willingness to assess the role that tech products, company culture, and supply chain practices may have in perpetuating structural racism.”
To follow the project and learn more, visit https://racialliteracy.tech/.
As AI becomes integrated into our everyday lives, how might it be gamed by humans, and how should we reconceptualize our notions of security?
“It is imperative to leverage a socio-technical frame to conceptualize safe and secure AI.”
In this Points essay, Researcher Alex Rosenblat connects Uber’s ideology to epistemological fragmentation in the U.S.
“Uber’s technology ideology comes from Silicon Valley, and how that becomes entrenched in law and practice is a microcosm of a larger political battle for power and governance.”
The New York Times | 05.07.19
The Algorithmic Accountability Act is a step forward, but there’s still room for improvement. Postdoctoral Scholar Andrew Selbst and Margot Kaminski explain.
“The bill is a meaningful first step in addressing the problems with algorithmic decision-making. Companies must be pushed to consider and document what goes into algorithm design. They should be pushed, too, to come up with solutions. But the bill is lacking in three main areas.”
Engaging Science, Technology and Society | 05.01.19
In this article, Research Lead Madeleine Clare Elish investigates who bears the responsibility when an automated system fails.
“Just as the crumple zone in a car is designed to absorb the force of impact in a crash, the human in a highly complex and automated system may become simply a component—accidentally or intentionally—that bears the brunt of the moral and legal responsibilities when the overall system malfunctions.”
The New York Times | 04.25.19
In this op-ed for The New York Times, Data & Society Research Lead Mary Madden argues that there is no “one size fits all” solution to privacy concerns in the digital age.
“When those who influence policy and technology design have a lower perception of privacy risk themselves, it contributes to a lack of investment in the kind of safeguards and protections that vulnerable communities both want and urgently need.”
points | 04.19.19
The Algorithmic Accountability Act is a great first step, and regulators who are tasked with implementing it should take sociotechnical frames into account.
“A sociological lens can help illuminate appropriate points of institutional intervention.”
points | 03.05.19
Data & Society Labor Engagement Lead Aiha Nguyen explains how technology is enabling new and more invasive forms of surveilling workers.
“As technology expands the scope and scale of what can be done with surveillance tools, workplace protections must also evolve.”
report | 04.15.19
Digital Identity in the Migration & Refugee Context analyzes the challenges of continually collecting identity data from migrants and refugees.
testimony | 04.04.19
On April 4, 2019, Data & Society Executive Director Janet Haven and Postdoctoral Scholar Andrew Selbst testified before the New York City Council’s Committee on Technology about the Open Algorithms Law (Local Law 49 of 2018). They called for oversight of the Automated Decision Systems Task Force to ensure access to “details of ADS systems in use by specific agencies” and a public engagement process.
Please find Janet Haven and Andrew Selbst’s written testimony below.
Our names are Janet Haven and Andrew D. Selbst. We are the executive director and a postdoctoral scholar at the Data & Society Research Institute, an independent non-profit research center dedicated to studying the social and cultural impacts of data-driven and automated technologies. Over the past five years, Data & Society has focused on the social and legal impacts of automated decision-making and artificial intelligence, publishing research and advising policymakers and industry actors on issues such as algorithmic bias, explainability, transparency, and accountability more generally.
Government services and operations play a crucial role in the lives of New York City’s citizens. Transparency and accountability in a government’s use of automated decision-making systems matters. Across the country, automated decision-making systems based on nonpublic data sources and algorithmic models currently inform decision-making on policing, criminal justice, housing, child welfare, educational opportunities, and myriad other fundamental issues.
This Task Force was set up to begin the hard work of building transparent and accountable processes to ensure that the use of such systems in New York City is geared to just outcomes, rather than only those which are most efficient. The adoption of such systems requires a reevaluation of current approaches to due process and the adoption of appropriate safeguards. It may require entirely new approaches to accountability when the city uses automated systems, as many such systems, through their very design, can obscure or conceal policy or decision-making processes.
We at Data & Society lauded the decision to establish a Task Force focused on developing a better understanding of these issues. Indeed, we celebrated the city leadership’s prescience in being the first government in the nation to establish a much-needed evidence base regarding the inherent complexity accompanying ADS adoption across multiple departments. Yet we have seen little evidence that the Task Force is living up to its potential. New York has a tremendous opportunity to lead the country in defining these new public safeguards, but time is growing short to deliver on the promise of this body.
We want to make two main points in our testimony today.
First, for the Task Force to complete its mandate in any meaningful sense, it must have access to the details of ADS systems in use by specific agencies and the ability to work closely with representatives from across agencies using ADS. We urge that Task Force members be given immediate access to specific, agency-level automated decision-making systems currently in use, as well as to the leadership in those departments and to others with insight into the design and use of these systems.
Social context is essential to defining fair and just outcomes. The city is understood to be using ADS in such diverse contexts as housing, education, child services, and criminal justice. The very idea of a fair or just outcome is impossible to define or debate without reference to the social context. Understanding the different value tradeoffs in decisions about pretrial risk assessments tells you nothing whatsoever about school choice. What is fair, just, or accountable in public housing policy says nothing about what is fair, just, and accountable in child services. This ability to address technological systems within the social context where they are used is what makes the ADS Task Force so important, and potentially so powerful in defining real accountability measures.
The legislative mandate itself also demonstrates why the Task Force requires access to agency technologies. Under the enacting law, the purpose of the Task Force is to make recommendations particular to the City’s agencies. Specifically, the Task Force must make recommendations for procedures by which explanations of decisions can be requested, biases can be detected, harms from biases can be redressed, the public can assess the ADS, and the systems and data can be archived. Each of these recommendations applies not to automated decision systems generally, but to “agency automated decision systems,” a term defined separately in the text of the law. Importantly, the law also mandates that the Task Force make recommendations about “[c]riteria for identifying which agency automated decision systems” should be subject to these procedures. Thus, the legislative mandate makes clear that for the Task Force to do its work, it will require access to the technologies that city agencies currently use or plan to use, as well as to the people in charge of their operation. Lacking this level of detail on actual agency-level use of automated decision-making systems, the recommendations can only be generic. Such generic recommendations will be ineffective because they will not be informative enough for the city to act on.
If the city wanted to find generic guidelines or recommendations for ADSs, it could have looked to existing scholarship on these issues instead of forming a Task Force. Indeed, there is an entire interdisciplinary field of scholarship that has emerged in the last several years, dedicated to the issues of Fairness, Accountability and Transparency (FAT*) in automated systems. This field has made significant strides in coming up with mathematical definitions for fairness that computers can parse, and creating myriad potential methods for bias reduction in automated systems.
But the academic work has fundamental limitations. Much of the research is, by necessity or due to limited access, based on small hypothetical scenarios—toy problems—rather than real-world applications of machine learning technology. This work is accomplished, as is characteristic of theoretical modeling, by stating assumptions about the world and datasets that are being used. In order to translate these solutions to the real world, researchers would have to know whether the datasets and other assumptions match the real-world scenarios.
Using information from city agencies, the Task Force can move beyond the academic focus on toy problems devoid of social context and assess the particular issues raised by systems used in practice. Without information about the systems in use, its recommendations will be limited to procedures at the greatest level of generality: things we could already guess, such as testing a system for bias or keeping it simple enough to be explainable. With that information, however, the Task Force can examine the particular challenges and tradeoffs at issue. With community input and guidance, it can assess the appropriateness of different definitions of bias in a given context and debate trade-offs between accuracy and explainability in specific social environments. The Task Force’s recommendations will only be useful if they are concrete and actionable, and that can only be achieved if its members are allowed to examine the way ADS operate in practice, with a view into both the technical and the social systems informing outcomes.
Second, we urge the Task Force to prioritize public engagement. Because social context is essential to defining fair and just outcomes, meaningful engagement with community stakeholders is fundamental to this process. Once the Task Force has access to detailed information about ADS systems in use, public listening sessions must be held to understand community experiences and concerns, with the goal of using that feedback to shape the Task Force’s process going forward. Iterating on and reviewing recommendations with community stakeholders as this work moves forward will be important to arriving at truly transparent, accountable, and just outcomes.
We are here today because we continue to believe the Task Force has great potential. We strongly believe that the Task Force’s work needs to be undertaken thoughtfully and contextually, centering on cooperation, transparency, and public engagement. The Task Force’s goal needs to be offering actionable and concrete recommendations on the use of ADS in New York City government. We hope that the above testimony provides useful suggestions to move toward that goal.
 See generally Andrew D. Selbst et al., Fairness and Abstraction in Sociotechnical Systems, Proceedings of the 2019 ACM Conference on Fairness, Accountability, and Transparency (ACM FAT*), 59.
 See Local Law No. 49 of 2018, Council Int. No. 1696-A of 2017 [hereinafter Local Law 49] (repeatedly referring to “agency automated decision systems”).
 Id. §§ 3(b)–(f).
 Id. § 1(a).
 Id. § 3(a) (emphasis added).
explainer | 02.06.19
Technology enables employers to increasingly monitor their employees. This explainer by Alexandra Mateescu and Aiha Nguyen identifies four current trends in workplace monitoring and surveillance: prediction and flagging tools; biometrics and health data; remote monitoring and time-tracking; and gamification and algorithmic management.
Mateescu and Nguyen consider how each trend impacts workers and workplace dynamics. For instance, freelancers on Upwork can be tracked through their keystrokes, mouse clicks, and screenshots to measure work time for clients. Work that cannot be measured in this way (for example, group brainstorming or long-term planning) may be devalued or go uncompensated.
The authors observe that information asymmetries are deepening as the boundaries of workplace privacy are changing. Tracking metrics like health data, for instance, can make way for discrimination and raises concerns about consent. The type of data employers collect will determine which work is valued, how they evaluate performance, and how workers are classified and compensated.
This explainer from Data & Society provides an introductory overview of concepts and current issues around technology’s impact on the workplace. It is being co-released with an explainer on Algorithmic Management in the Workplace. For more coverage of emerging issues in labor and technology, visit Social Instabilities in Labor Futures.
explainer | 02.06.19
This explainer by Alexandra Mateescu and Aiha Nguyen defines algorithmic management and reviews how the concept challenges workers’ rights in sectors including retail, the service industry, and delivery and logistics. The authors outline existing research on the ways that algorithmic management is manifesting across labor industries, shifting workplace power dynamics, and putting workers at a disadvantage. It can enable increased surveillance and control while reducing transparency.
Defined as “a diverse set of technology tools and techniques that structure the conditions of work and remotely manage workforces,” algorithmic management relies on data collection and worker surveillance to enable automated decision-making in real time. For example, an algorithm might set and assign servers’ shifts.
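To make that shift-assignment example concrete, here is a minimal, purely hypothetical sketch of demand-driven scheduling. The function name, staffing ratio, and data are invented for illustration and do not reflect any real platform’s system.

```python
# Hypothetical sketch of an "algorithmic manager": staffing levels are
# derived from a demand forecast, and shifts are assigned automatically.
# All names and numbers below are illustrative assumptions.

def assign_shifts(predicted_demand, servers, customers_per_server=20):
    """Assign just enough servers to each shift to cover forecast demand.

    predicted_demand: dict mapping shift name -> expected customer count
    servers: list of worker names, rotated so the same people
             are not always picked first
    """
    schedule = {}
    available = list(servers)
    for shift, demand in predicted_demand.items():
        # ceiling division: how many servers the forecast "requires"
        needed = -(-demand // customers_per_server)
        schedule[shift] = available[:needed]
        # rotate the roster for the next shift
        available = available[needed:] + available[:needed]
    return schedule

demand = {"Fri lunch": 45, "Fri dinner": 90}
staff = ["Ana", "Ben", "Chu", "Dee", "Eli"]
print(assign_shifts(demand, staff))
```

Even this toy version shows the dynamic the explainer describes: hours follow the forecast, so a low demand prediction simply means fewer assigned shifts, with the variance borne by workers rather than the firm.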
Because companies aren’t “directly” managing their workers, algorithmic management makes it easier to classify workers as independent contractors, thus relieving companies of the pressure of providing standard worker benefits. Algorithmic management can provide avenues for bias and discrimination, while making it difficult to hold companies accountable. Companies ultimately benefit and continue to scale operations while cutting costs and labor.
This explainer from Data & Society provides an introductory overview of concepts and current issues around technology’s impact on the workplace. It is being co-released with an explainer on Workplace Monitoring & Surveillance. For more coverage of emerging issues in labor and technology, visit Social Instabilities in Labor Futures.
“AI in Context” shows how automated and AI technologies are reconfiguring work in small family-owned farms and grocery stores.
Data & Society Founder and President danah boyd and Researcher Madeleine Clare Elish lay the groundwork for questioning AI and ethics.
“Without comprehensively accounting for the strengths and weaknesses of technical practices, the work of ethics—which includes weighing the risks and benefits and potential consequences of an AI system—will be incomplete.”
Criminal Justice and Behavior | 11.23.18
Data & Society Fellow Cynthia Conti-Cook and co-authors assess the bias involved in risk assessment tools.
“In the top layer, we identify challenges to fairness within the risk-assessment models themselves. We explain types of statistical fairness and the tradeoffs between them. The second layer covers biases embedded in data. Using data from a racially biased criminal justice system can lead to unmeasurable biases in both risk scores and outcome measures. The final layer engages conceptual problems with risk models: Is it fair to make criminal justice decisions about individuals based on groups?”
Fast Company | 11.20.18
Data & Society 2015-2016 Fellow Wilneida Negron connects her past social work to her current work as a political scientist and technologist.
“We are at the cusp of a new wave of technological thinking, one defined by a new mantra that is the opposite of Zuckerberg’s: ‘Move carefully and purposely, and embrace complexity.’ As part of this wave, a new, inclusive, and intersectional generation of people are using technology for the public interest. This new wave will help us prepare for a future where technical expertise coexists with empathy, humility, and perseverance.”
The Cancer Letter | 11.16.18
As AI becomes integrated into different facets of our lives, Data & Society Researcher Kadija Ferryman joins Robert A. Winn in considering what this means for the field of health.
“How can we bring together the excitement for the possibilities of AI in medicine with the sobering reality of stubborn health disparities that remain despite technological advances?”
Content or Context Moderation? by Robyn Caplan illustrates the organizational contexts of three types of content moderation strategies by drawing from interviews with 10 major digital platforms.
Data Craft analyzes how bad actors manipulate metadata to create effective disinformation campaigns and provides tips for researchers and technology companies trying to spot this “data craft.”
MIT Technology Review | 10.23.18
Data & Society Health + Data Lead Mary Madden considers what patient privacy means in the current age of technology.
“In the era of data-driven medicine, systems for handling data need to avoid anything that feels like manipulation—whether it’s subtle or overt. At a minimum, the process of obtaining consent should be separated from the process of obtaining care.”
report | 10.17.18
Weaponizing the Digital Influence Machine: The Political Perils of Online Ad Tech identifies the technologies, conditions, and tactics that enable today’s digital advertising infrastructure to be weaponized by political and anti-democratic actors.
Building off the research for her book Uberland: How Algorithms are Rewriting the Rules of Work, Data & Society Researcher Alex Rosenblat explains algorithmic management in the gig economy.
“Data and algorithms are presented as objective, neutral, even benevolent: Algorithms gave us super-convenient food delivery services and personalized movie recommendations. But Uber and other ride-hailing apps have taken the way Silicon Valley uses algorithms and applied it to work, and that’s not always a good thing.”
SSRN | 10.11.18
How are AI technologies being integrated in health care? What are the broader implications of this integration? Data & Society Researcher Madeleine Clare Elish investigates.
“This paper examines the development of a machine learning-driven sepsis risk detection tool in a hospital Emergency Department in order to interrogate the contingent and deeply contextual ways in which AI technologies are likely to be adopted in healthcare.”
In Governing Artificial Intelligence: Upholding Human Rights & Dignity, Mark Latonero shows how human rights can serve as a “North Star” to guide the development and governance of artificial intelligence.
The report draws the connections between AI and human rights; reframes recent AI-related controversies through a human rights lens; and reviews current stakeholder efforts at the intersection of AI and human rights.
Alternative Influence: Broadcasting the Reactionary Right on YouTube presents data from approximately 65 political influencers across 81 channels to identify the “Alternative Influence Network (AIN)”: an alternative media system that adopts the techniques of brand influencers to build audiences and “sell” them political ideology.
points | 09.14.18
On September 13th, Data & Society Founder and President danah boyd gave the keynote speech at the Online News Association Conference. Read the transcript of her talk on Points.
“Now, more than ever, we need a press driven by ideals determined to amplify what is most important for enabling an informed citizenry.”
points | 09.10.18
Increasingly, technology’s impact on infrastructure is becoming a health concern. In this Points piece, Data & Society Researchers Mikaela Pitcan, Alex Rosenblat, Mary Madden, and Kadija Ferryman tease out why this intersection warrants further research.
“However, there is an urgent need to understand the interdependencies between technology, infrastructure and health, and how these relationships affect Americans’ ability to live the healthiest lives possible. How can we support design, decision-making, and governance of our infrastructures in order to ensure more equitable health outcomes for all Americans?”
Slate | 08.27.18
2017-2018 Fellow Jeanna Matthews and Research Analyst Kinjal Dave respond to Deji Olukotun’s story about an algorithmic tennis match.
“The answer can’t be derived from the past alone: It depends on what we collectively decide about the future, about what justice looks like, about leveling the playing field in sports and in life. As in Olukotun’s story, humans and computers will be working together to pick winners and losers. We need to collectively decide on and enforce the rules they will follow. We need the ability to understand, challenge, and audit the decisions. A level playing field won’t be the future unless we insist on it.”
Slate | 08.13.18
Drawing on conclusions from the Data & Society report Beyond Disruption, Researcher Alexandra Mateescu discusses surveillance of domestic care workers online.
“Online marketplaces may not be the root cause of individual employers’ biases, but their design is not neutral. They are built with a particular archetype of what an “entrepreneurial” domestic worker looks like—one who feels at home in the world of apps, social media, and online self-branding—and ultimately replicates and can even exacerbate many of the divisions that came with our predigital workplaces. As platform companies gain growing power over the hiring processes of a whole industry, they will need to actively work against the embedded inequalities in the markets they now mediate.”
Other | 08.09.17
In this article, Data & Society Founder and President danah boyd and Researcher Madeleine Clare Elish break down the “magic” narrative around AI systems.
“‘Big Data’ and ‘artificial intelligence’ have captured the public imagination and are profoundly shaping social, economic, and political spheres. Through an interrogation of the histories, perceptions, and practices that shape these technologies, we problematize the myths that animate the supposed “magic” of these systems.”
report | 07.18.18
This report by Data & Society Researcher Bonnie Tijerina and Michael Zimmer is the culmination of gatherings that brought together different privacy practitioners to discuss digital privacy for libraries.
“While the recent surge in privacy-related activities within the library community is welcome, we see a gap in the conversations we are having about privacy and our digital presence – a knowledge gap, a lack of shared vocabulary, disparate skill sets, and varied understanding. This gap prevents inclusion across the profession and lacks clarity for those responsible for building tools and licensing products.”
On Friday, June 8, the second-annual Future Perfect gathering at Data & Society brought together individuals from a variety of world-building disciplines—from art and fiction to architecture and science—to explore the uses, abuses, and paradoxes of speculative futures.
How can we trace the spread of disinformation by tracking metadata? Data & Society Research Affiliate Amelia Acker explains.
“One way of more fully understanding the data craftwork of disinformation on social media platforms is by reading the metadata just as closely as the algorithms do.”
In an op-ed for The New York Times, Data & Society Researcher Alex Rosenblat shatters the narrative that Uber encapsulates the entire gig economy.
“But this industry has, until recently, operated largely informally, with jobs secured by word-of-mouth. That’s changing, as employers are increasingly turning to Uber-like services to find nannies, housecleaners and other care workers. These new gig economy companies, while making it easier for some people to find short-term work, have created hardships for others, and may leave many experienced care workers behind.”
In this reading list, Data & Society Researcher Alexandra Mateescu and Postdoctoral Scholar Julia Ticona provide a pathway for deeper investigations into themes such as gender inequality and algorithmic visibility in the gig economy.
“This list is meant for readers of Beyond Disruption who want to dig more deeply into some of the key areas explored in its pages. It isn’t meant to be exhaustive, but rather give readers a jumping off point for their own investigations.”
Drawn from the experiences of U.S. ridehail, care, and cleaning platform workers, “Beyond Disruption” demonstrates how technology reshapes the future of labor.
Data & Society INFRA Lead Ingrid Burrington traces the history of Silicon Valley and its residents.
“Now San Jose has an opportunity to lift up these workers placed at the bottom of the tech industry as much as the wealthy heroes at its top. If Google makes good on the “deep listening” it has promised, and if San Jose residents continue to challenge the company’s vague promises, the Diridon project might stand a chance of putting forth a genuinely visionary alternative to the current way of life in the Santa Clara Valley and the founder-centric, organized-labor-allergic ideology of Silicon Valley. If it does, San Jose might yet justify its claim to be the center of Silicon Valley—if not as its capital, at least as its heart.”
For Points, Data & Society Postdoctoral Scholar Caroline Jack reviews the history of advertising imaginaries.
“The question of what protections ads themselves deserve, and to what degree people deserve to be protected from ads, is ripe for reconsideration.”