Data & Society Labor Engagement Lead Aiha Nguyen explains how technology is enabling new and more invasive forms of worker surveillance.

“As technology expands the scope and scale of what can be done with surveillance tools, workplace protections must also evolve.”


report | 04.15.19

Digital Identity in the Migration & Refugee Context

Mark Latonero, Keith Hiatt, Antonella Napolitano, Giulia Clericetti, Melanie Penagos

Digital Identity in the Migration & Refugee Context analyzes the challenges of continually collecting identity data from migrants & refugees.


On April 4, 2019, Data & Society Executive Director Janet Haven and Postdoctoral Scholar Andrew Selbst testified before the New York City Council’s Committee on Technology about the Open Algorithms Law (Local Law 49 of 2018). They called for oversight of the Automated Decision Systems Task Force to ensure access to “details of ADS systems in use by specific agencies” and a public engagement process. 

Others testifying included Task Force members Solon Barocas and Julia Stoyanovich and BetaNYC Executive Director Noel Hidalgo. A video of the full hearing is available online.

Please find Janet Haven and Andrew Selbst’s written testimony below. 


Our names are Janet Haven and Andrew D. Selbst. We are the executive director and a postdoctoral scholar at the Data & Society Research Institute, an independent non-profit research center dedicated to studying the social and cultural impacts of data-driven and automated technologies. Over the past five years, Data & Society has focused on the social and legal impacts of automated decision-making and artificial intelligence, publishing research and advising policymakers and industry actors on issues such as algorithmic bias, explainability, transparency, and accountability more generally.

Government services and operations play a crucial role in the lives of New York City’s citizens. Transparency and accountability in a government’s use of automated decision-making systems matter. Across the country, automated decision-making systems based on nonpublic data sources and algorithmic models currently inform decision-making on policing, criminal justice, housing, child welfare, educational opportunities, and myriad other fundamental issues.

This Task Force was set up to begin the hard work of building transparent and accountable processes to ensure that the use of such systems in New York City is geared toward just outcomes, rather than merely the most efficient ones. The adoption of such systems requires a reevaluation of current approaches to due process and the adoption of appropriate safeguards. It may require entirely new approaches to accountability when the city uses automated systems, as many such systems, through their very design, can obscure or conceal policy or decision-making processes.

We at Data & Society lauded the decision to establish a Task Force focused on developing a better understanding of these issues. Indeed, we celebrated the city leadership’s prescience in being the first government in the nation to establish a much-needed evidence base regarding the inherent complexity accompanying ADS adoption across multiple departments. Yet we have seen little evidence that the Task Force is living up to its potential. New York has a tremendous opportunity to lead the country in defining these new public safeguards, but time is growing short to deliver on the promise of this body.

We want to make two main points in our testimony today.

First, for the Task Force to complete its mandate in any meaningful sense, it must have access to the details of ADS systems in use by specific agencies and the ability to work closely with representatives from across agencies using ADS. We urge that Task Force members be given immediate access to specific, agency-level automated decision-making systems currently in use, as well as to the leadership in those departments and to others with insight into the design and use of these systems.

Social context is essential to defining fair and just outcomes.[1] The city is understood to be using ADS in such diverse contexts as housing, education, child services, and criminal justice. The very idea of a fair or just outcome is impossible to define or debate without reference to the social context. Understanding the different value tradeoffs in decisions about pretrial risk assessments tells you nothing whatsoever about school choice. What is fair, just, or accountable in public housing policy says nothing about what is fair, just, and accountable in child services. This ability to address technological systems within the social context where they are used is what makes the ADS Task Force so important, and potentially so powerful in defining real accountability measures.

The legislative mandate itself also demonstrates why the Task Force requires access to agency technologies. Under the enacting law, the purpose of the Task Force is to make recommendations particular to the City’s agencies.[2] Specifically, the Task Force must make recommendations for procedures by which explanations of the decisions can be requested, biases can be detected, harms from biases can be redressed, the public can assess the ADS, and the systems and data can be archived.[3] Each of these recommendations applies not to automated decision systems generally, but to “agency automated decision systems,” a term defined separately in the text of the law.[4] Importantly, the law also mandates that the Task Force make recommendations about “[c]riteria for identifying which agency automated decision systems” should be subject to these procedures.[5] Thus, the legislative mandate makes clear that for the Task Force to do its work, it will require access to the technologies that city agencies currently use or plan to use, as well as the people in charge of their operation. Lacking this level of detail on actual agency-level use of automated decision-making systems, the recommendations can only be generic. Such generic recommendations will be ineffective because they will not be informative enough for the city to act on.

If the city wanted to find generic guidelines or recommendations for ADSs, it could have looked to existing scholarship on these issues instead of forming a Task Force. Indeed, there is an entire interdisciplinary field of scholarship that has emerged in the last several years, dedicated to the issues of Fairness, Accountability and Transparency (FAT*) in automated systems.[6] This field has made significant strides in coming up with mathematical definitions for fairness that computers can parse, and creating myriad potential methods for bias reduction in automated systems.
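
To make the idea of machine-readable fairness definitions concrete, the following is a minimal sketch (our illustration, not code from the testimony or any particular FAT* paper) of two criteria commonly discussed in this literature, demographic parity and equalized odds, computed for a hypothetical binary classifier:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    # Difference in positive-decision rates between the two groups;
    # a value near 0 means both groups are selected at similar rates.
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equalized_odds_gap(y_true, y_pred, group):
    # Largest cross-group difference in error rates: label 0 gives the
    # false-positive-rate gap, label 1 the true-positive-rate gap.
    gaps = []
    for label in (0, 1):
        mask = y_true == label
        gaps.append(abs(y_pred[mask & (group == 0)].mean()
                        - y_pred[mask & (group == 1)].mean()))
    return max(gaps)

# Hypothetical data for illustration only.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, 1000)   # protected attribute
y_true = rng.integers(0, 2, 1000)  # actual outcomes
y_pred = rng.integers(0, 2, 1000)  # the classifier's decisions

print(demographic_parity_gap(y_pred, group))
print(equalized_odds_gap(y_true, y_pred, group))
```

Which of these criteria is appropriate, and how large a gap is tolerable, cannot be read off the math; it depends on the social context in which the system is deployed.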

But the academic work has fundamental limitations. Much of the research is, by necessity or due to limited access, based on small hypothetical scenarios—toy problems—rather than real-world applications of machine learning technology.[7] This work is accomplished, as is characteristic of theoretical modeling, by stating assumptions about the world and datasets that are being used. In order to translate these solutions to the real world, researchers would have to know whether the datasets and other assumptions match the real-world scenarios.

Using information from city agencies, the Task Force can advance beyond the academic focus on toy problems devoid of social context and assess particular issues for systems used in practice. Without information about the systems in use, the Task Force’s recommendations will be limited to procedures at the greatest level of generality—things we already would guess, such as testing the system for bias or keeping it simple enough to be explainable. But with information about these systems, the Task Force can examine the particular challenges and tradeoffs at issue. With community input and guidance, it can assess the appropriateness of different definitions of bias in a given context, and debate trade-offs between accuracy and explainability given specific social environments. The recommendations of the Task Force will only be useful if they are concrete and actionable, and that can only be achieved if its members are allowed to examine the way ADS operate in practice, with a view into both the technical and the social systems informing outcomes.

Second, we urge the Task Force to prioritize public engagement. Because social context is essential to defining fair and just outcomes, meaningful engagement with community stakeholders is fundamental to this process. Once the Task Force has access to detailed information about ADS systems in use, public listening sessions must be held to understand community experiences and concerns, with the goal of using that feedback to shape the Task Force’s process going forward. Iterating on recommendations and reviewing them with community stakeholders as the work moves forward will be important to arriving at truly transparent, accountable, and just outcomes.

We are here today because we continue to believe the Task Force has great potential. We strongly believe that the Task Force’s work needs to be undertaken thoughtfully and contextually, centering on cooperation, transparency, and public engagement. The Task Force’s goal needs to be offering actionable and concrete recommendations on the use of ADS in New York City government. We hope that the above testimony provides useful suggestions to move toward that goal.

Thank you.


[1] See generally Andrew D. Selbst et al., Fairness and Abstraction in Sociotechnical Systems, Proceedings of the 2019 ACM Conference on Fairness, Accountability, and Transparency (ACM FAT*), 59.

[2] See Local Law No. 49 of 2018, Council Int. No. 1696-A of 2017 [hereinafter Local Law 49] (repeatedly referring to “agency automated decision systems”).

[3] Id. §§ 3(b)–(f).

[4] Id. § 1(a).

[5] Id. § 3(a) (emphasis added).

[6] ACM Conference on Fairness, Accountability, and Transparency (ACM FAT*), https://fatconference.org/

[7] See generally Andrew D. Selbst et al., Fairness and Abstraction in Sociotechnical Systems, Proceedings of the 2019 ACM Conference on Fairness, Accountability, and Transparency (ACM FAT*), 59.


explainer | 02.06.19

Explainer: Workplace Monitoring & Surveillance

Alexandra Mateescu, Aiha Nguyen

Technology is enabling employers to monitor their employees ever more closely. This explainer by Alexandra Mateescu and Aiha Nguyen identifies four current trends in workplace monitoring and surveillance: prediction and flagging tools; biometrics and health data; remote monitoring and time-tracking; and gamification and algorithmic management.

Mateescu and Nguyen consider how each trend impacts workers and workplace dynamics. For instance, freelancers on Upwork can be tracked through their keystrokes, mouse clicks, and screenshots to measure work time for clients. Work that cannot be measured in this way (for example, group brainstorming or long-term planning) may be devalued or go uncompensated.
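
As a rough sketch of how such activity-based metering can work (a hypothetical example of ours, not Upwork’s actual implementation), a tracker might bill a time slice only if it contains input events, which is precisely why off-keyboard work registers as idle:

```python
from datetime import datetime, timedelta

# Hypothetical keystroke/mouse-click timestamps captured by a tracker.
events = [
    datetime(2019, 2, 6, 9, 0, 12),
    datetime(2019, 2, 6, 9, 3, 45),
    datetime(2019, 2, 6, 9, 4, 2),
    datetime(2019, 2, 6, 10, 30, 0),
]

start = datetime(2019, 2, 6, 9, 0)
SLICE = timedelta(minutes=10)

# A ten-minute slice counts as "worked" only if at least one input event
# falls inside it; brainstorming or planning away from the keyboard
# between 9:10 and 10:30 shows up here as idle, unbillable time.
active_slices = {int((event - start) / SLICE) for event in events}
print(f"billable minutes: {len(active_slices) * 10}")  # -> 20
```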

The authors observe that information asymmetries are deepening as the boundaries of workplace privacy change. Tracking metrics like health data, for instance, can open the door to discrimination and raise concerns about consent. The types of data employers collect will determine which work is valued, how performance is evaluated, and how workers are classified and compensated.


This explainer from Data & Society provides a basic introductory overview of concepts and current issues around technology’s impact on the workplace. It is being co-released with an explainer on Algorithmic Management in the Workplace. For more coverage of emerging issues in labor and technology, visit Social Instabilities in Labor Futures.


explainer | 02.06.19

Explainer: Algorithmic Management in the Workplace

Alexandra Mateescu, Aiha Nguyen

This explainer by Alexandra Mateescu and Aiha Nguyen defines algorithmic management and reviews how the practice challenges workers’ rights in sectors including retail, the service industry, and delivery and logistics. The authors outline existing research on the ways that algorithmic management is manifesting across various labor industries, shifting workplace power dynamics, and putting workers at a disadvantage: it can enable increased surveillance and control while reducing transparency.

Defined as “a diverse set of technology tools and techniques that structure the conditions of work and remotely manage workforces,” algorithmic management relies on data collection and worker surveillance to enable automated decision-making in real time. For example, an algorithm might determine and assign servers’ shifts.
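
As a purely illustrative sketch (real scheduling systems are proprietary and far more complex), such an assignment might amount to ranking servers by a predicted-performance score and filling each shift from the top:

```python
# Shift -> number of servers the demand forecast calls for (hypothetical).
shifts = {"friday_dinner": 2, "saturday_lunch": 1}

# Hypothetical per-worker scores produced by the management system.
scores = {"ana": 0.91, "ben": 0.78, "carla": 0.85}

schedule = {}
for shift, needed in shifts.items():
    # Highest-scoring workers get the shifts; the scoring logic itself
    # is invisible to the workers it governs.
    ranked = sorted(scores, key=scores.get, reverse=True)
    schedule[shift] = ranked[:needed]

print(schedule)  # {'friday_dinner': ['ana', 'carla'], 'saturday_lunch': ['ana']}
```

Nothing in such a system tells a worker why they did or did not get hours, which is the transparency gap the authors describe.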

Because companies aren’t “directly” managing their workers, algorithmic management makes it easier to classify workers as independent contractors, thus relieving companies of the obligation to provide standard worker benefits. Algorithmic management can also open avenues for bias and discrimination while making it difficult to hold companies accountable. Companies ultimately benefit, continuing to scale operations while cutting labor costs.


This explainer from Data & Society provides a basic introductory overview of concepts and current issues around technology’s impact on the workplace. It is being co-released with an explainer on Workplace Monitoring & Surveillance. For more coverage of emerging issues in labor and technology, visit Social Instabilities in Labor Futures.


report | 01.30.19

AI in Context

Alexandra Mateescu, Madeleine Clare Elish

“AI in Context” shows how automated and AI technologies are reconfiguring work in small family-owned farms and grocery stores.



New America | 11.26.18

Don’t Believe Every AI You See

Madeleine Clare Elish, danah boyd

Data & Society Founder and President danah boyd and Researcher Madeleine Clare Elish lay the groundwork for questioning AI and ethics.

“Without comprehensively accounting for the strengths and weaknesses of technical practices, the work of ethics—which includes weighing the risks and benefits and potential consequences of an AI system—will be incomplete.”


Criminal Justice and Behavior | 11.23.18

Layers of Bias: A Unified Approach for Understanding Problems With Risk Assessment

Laurel Eckhouse, Kristian Lum, Cynthia Conti-Cook, Julie Ciccolini

Data & Society Fellow Cynthia Conti-Cook and co-authors assess the bias involved in risk assessment tools.

“In the top layer, we identify challenges to fairness within the risk-assessment models themselves. We explain types of statistical fairness and the tradeoffs between them. The second layer covers biases embedded in data. Using data from a racially biased criminal justice system can lead to unmeasurable biases in both risk scores and outcome measures. The final layer engages conceptual problems with risk models: Is it fair to make criminal justice decisions about individuals based on groups?”
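
One of the tradeoffs the authors describe can be shown with a small worked example (ours, not taken from the article). Suppose a risk score is equally well calibrated for two groups: everyone scored 0.8 reoffends 80% of the time and everyone scored 0.2 reoffends 20% of the time, regardless of group. If the groups differ in how many members receive the high score, the false positive rate still differs between them:

```python
def false_positive_rate(share_high):
    # Calibrated scores: of those scored 0.8, 80% reoffend; of those
    # scored 0.2, 20% do. Everyone scored 0.8 is flagged as high risk.
    flagged_but_clean = share_high * (1 - 0.8)            # flagged, no reoffense
    all_clean = flagged_but_clean + (1 - share_high) * (1 - 0.2)
    return flagged_but_clean / all_clean

print(f"group A (30% scored high): FPR = {false_positive_rate(0.30):.2f}")  # 0.10
print(f"group B (60% scored high): FPR = {false_positive_rate(0.60):.2f}")  # 0.27
```

Calibration and equal false positive rates cannot both hold here; choosing between them is a value judgment, not a purely technical one.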


Data & Society 2015-2016 Fellow Wilneida Negron connects her past social work to her current work as a political scientist and technologist.

“We are at the cusp of a new wave of technological thinking, one defined by a new mantra that is the opposite of Zuckerberg’s: ‘Move carefully and purposely, and embrace complexity.’ As part of this wave, a new, inclusive, and intersectional generation of people are using technology for the public interest. This new wave will help us prepare for a future where technical expertise coexists with empathy, humility, and perseverance.”


As AI becomes integrated into different facets of our lives, Data & Society Researcher Kadija Ferryman joins Robert A. Winn in considering what this means for the field of health.

“How can we bring together the excitement for the possibilities of AI in medicine with the sobering reality of stubborn health disparities that remain despite technological advances?”


Content or Context Moderation? by Robyn Caplan illustrates the organizational contexts of three types of content moderation strategies, drawing from interviews with 10 major digital platforms.


report | 11.05.18

Data Craft

Amelia Acker

Data Craft analyzes how bad actors manipulate metadata to create effective disinformation campaigns and provides tips for researchers and technology companies trying to spot this “data craft.”


Data & Society Health + Data Lead Mary Madden considers what patient privacy means in the current age of technology.

“In the era of data-driven medicine, systems for handling data need to avoid anything that feels like manipulation—whether it’s subtle or overt. At a minimum, the process of obtaining consent should be separated from the process of obtaining care.”


report | 10.17.18

Weaponizing the Digital Influence Machine

Anthony Nadler, Matthew Crain, and Joan Donovan

Weaponizing the Digital Influence Machine: The Political Perils of Online Ad Tech identifies the technologies, conditions, and tactics that enable today’s digital advertising infrastructure to be weaponized by political and anti-democratic actors.


The New York Times | 10.12.18

When Your Boss Is an Algorithm

Alex Rosenblat

Building off the research for her book Uberland: How Algorithms are Rewriting the Rules of Work, Data & Society Researcher Alex Rosenblat explains algorithmic management in the gig economy.

“Data and algorithms are presented as objective, neutral, even benevolent: Algorithms gave us super-convenient food delivery services and personalized movie recommendations. But Uber and other ride-hailing apps have taken the way Silicon Valley uses algorithms and applied it to work, and that’s not always a good thing.”


How are AI technologies being integrated into health care? What are the broader implications of this integration? Data & Society Researcher Madeleine Clare Elish investigates.

“This paper examines the development of a machine learning-driven sepsis risk detection tool in a hospital Emergency Department in order to interrogate the contingent and deeply contextual ways in which AI technologies are likely to be adopted in healthcare.”


In Governing Artificial Intelligence: Upholding Human Rights & Dignity, Mark Latonero shows how human rights can serve as a “North Star” to guide the development and governance of artificial intelligence.

The report draws the connections between AI and human rights; reframes recent AI-related controversies through a human rights lens; and reviews current stakeholder efforts at the intersection of AI and human rights.


report | 09.18.18

Alternative Influence

Rebecca Lewis

Alternative Influence: Broadcasting the Reactionary Right on YouTube presents data from approximately 65 political influencers across 81 channels to identify the “Alternative Influence Network (AIN)”: an alternative media system that adopts the techniques of brand influencers to build audiences and “sell” them political ideology.


On September 13th, Data & Society Founder and President danah boyd gave the keynote speech at the Online News Association Conference. Read the transcript of her talk on Points.

“Now, more than ever, we need a press driven by ideals determined to amplify what is most important for enabling an informed citizenry.”


points | 09.10.18

Technology’s Impact on Infrastructure is a Health Concern

Mikaela Pitcan, Alex Rosenblat, Mary Madden, Kadija Ferryman

Increasingly, technology’s impact on infrastructure is becoming a health concern. In this Points piece, Data & Society Researchers Mikaela Pitcan, Alex Rosenblat, Mary Madden, and Kadija Ferryman tease out why this intersection warrants further research.

“However, there is an urgent need to understand the interdependencies between technology, infrastructure and health, and how these relationships affect Americans’ ability to live the healthiest lives possible. How can we support design, decision-making, and governance of our infrastructures in order to ensure more equitable health outcomes for all Americans?”


2017-2018 Fellow Jeanna Matthews and Research Analyst Kinjal Dave respond to Deji Olukotun’s story about an algorithmic tennis match.

 “The answer can’t be derived from the past alone: It depends on what we collectively decide about the future, about what justice looks like, about leveling the playing field in sports and in life. As in Olukotun’s story, humans and computers will be working together to pick winners and losers. We need to collectively decide on and enforce the rules they will follow. We need the ability to understand, challenge, and audit the decisions. A level playing field won’t be the future unless we insist on it.”


Drawing on conclusions from the Data & Society report Beyond Disruption, Researcher Alexandra Mateescu discusses surveillance of domestic care workers online.

“Online marketplaces may not be the root cause of individual employers’ biases, but their design is not neutral. They are built with a particular archetype of what an ‘entrepreneurial’ domestic worker looks like—one who feels at home in the world of apps, social media, and online self-branding—and ultimately replicates and can even exacerbate many of the divisions that came with our predigital workplaces. As platform companies gain growing power over the hiring processes of a whole industry, they will need to actively work against the embedded inequalities in the markets they now mediate.”


other | 08.09.17

Situating methods in the magic of Big Data and AI

Madeleine Clare Elish, danah boyd

In this article, Data & Society Founder and President danah boyd and Researcher Madeleine Clare Elish break down the “magic” narrative around AI systems.

“‘Big Data’ and ‘artificial intelligence’ have captured the public imagination and are profoundly shaping social, economic, and political spheres. Through an interrogation of the histories, perceptions, and practices that shape these technologies, we problematize the myths that animate the supposed ‘magic’ of these systems.”


This report by Data & Society Researcher Bonnie Tijerina and Michael Zimmer is the culmination of gatherings that brought together different privacy practitioners to discuss digital privacy for libraries.

“While the recent surge in privacy-related activities within the library community is welcome, we see a gap in the conversations we are having about privacy and our digital presence – a knowledge gap, a lack of shared vocabulary, disparate skill sets, and varied understanding. This gap prevents inclusion across the profession and lacks clarity for those responsible for building tools and licensing products.”


conference | 08.18.18

Future Perfect 2018

Curated by Ingrid Burrington

On Friday, June 8, the second annual Future Perfect gathering at Data & Society brought together individuals from a variety of world-building disciplines—from art and fiction to architecture and science—to explore the uses, abuses, and paradoxes of speculative futures.


How can we trace the spread of disinformation by tracking metadata? Data & Society Research Affiliate Amelia Acker explains.

“One way of more fully understanding the data craftwork of disinformation on social media platforms is by reading the metadata just as closely as the algorithms do.”


In an op-ed for The New York Times, Data & Society Researcher Alex Rosenblat shatters the narrative that Uber encapsulates the entire gig economy.

“But this industry has, until recently, operated largely informally, with jobs secured by word-of-mouth. That’s changing, as employers are increasingly turning to Uber-like services to find nannies, housecleaners and other care workers. These new gig economy companies, while making it easier for some people to find short-term work, have created hardships for others, and may leave many experienced care workers behind.”


points | 06.27.18

5 Star Service: A curated reading list

Alexandra Mateescu, Julia Ticona

In this reading list, Data & Society Researcher Alexandra Mateescu and Postdoctoral Scholar Julia Ticona provide a pathway for deeper investigations into themes such as gender inequality and algorithmic visibility in the gig economy.

“This list is meant for readers of Beyond Disruption who want to dig more deeply into some of the key areas explored in its pages. It isn’t meant to be exhaustive, but rather give readers a jumping off point for their own investigations.”


report | 06.27.18

Beyond Disruption

Julia Ticona, Alexandra Mateescu, Alex Rosenblat

Drawn from the experiences of U.S. ridehail, care, and cleaning platform workers, “Beyond Disruption” demonstrates how technology reshapes the future of labor.


Data & Society INFRA Lead Ingrid Burrington traces the history of Silicon Valley and its residents.

“Now San Jose has an opportunity to lift up these workers placed at the bottom of the tech industry as much as the wealthy heroes at its top. If Google makes good on the ‘deep listening’ it has promised, and if San Jose residents continue to challenge the company’s vague promises, the Diridon project might stand a chance of putting forth a genuinely visionary alternative to the current way of life in the Santa Clara Valley and the founder-centric, organized-labor-allergic ideology of Silicon Valley. If it does, San Jose might yet justify its claim to be the center of Silicon Valley—if not as its capital, at least as its heart.”


For Points, Data & Society Postdoctoral Scholar Caroline Jack reviews the history of advertising imaginaries.

“The question of what protections ads themselves deserve, and to what degree people deserve to be protected from ads, is ripe for reconsideration.”


In this Medium post, Founder and President danah boyd reflects on the current state of journalism and offers next steps.

“Contemporary propaganda isn’t about convincing someone to believe something, but convincing them to doubt what they think they know.”


Data & Society Research Analyst Melanie Penagos summarizes three blog posts that grew out of Data & Society’s AI & Human Rights Workshop in April 2018.

“Following Data & Society’s AI & Human Rights Workshop in April, several participants continued to reflect on the convening and comment on the key issues that were discussed. The following is a summary of articles written by workshop attendees Bendert Zevenbergen, Elizabeth Eagen, and Aubra Anthony.”


How will the introduction of AI into the field of medicine affect the doctor-patient relationship? Data & Society Fellow Claudia Haupt identifies some legal questions we should be asking.

“I contend that AI will not entirely replace human doctors (for now) due to unresolved issues in transposing diagnostics to a non-human context, including both limits on the technical capability of existing AI and open questions regarding legal frameworks such as professional duty and informed consent.”


How do people decide what to trust? Data & Society Postdoctoral Scholar Francesca Tripodi shares insights from her research into conservative news practices.

“While not all Christians are conservative nor all conservatives religious, there is a clear connection between how the process of scriptural inference trickles down into conservative methods of inquiry. Favoring the original text of the Constitution is closely tied to the practices of ‘constitutional conservatism,’ and currently members in all three branches of the U.S. government rely on practices of scriptural inference to make important political decisions.”



The Guardian | 06.01.18

The Case for Quarantining Extremist Ideas

danah boyd, Joan Donovan

Data & Society Founder and President danah boyd and Media Manipulation Research Lead Joan Donovan challenge newsrooms to practice “strategic silence” to avoid amplifying extremist messaging.

“Editors used to engage in strategic silence – set agendas, omit extremist ideas and manage voices – without knowing they were doing so. Yet the online context has enhanced extremists’ abilities to create controversies, prompting newsrooms to justify covering their spectacles. Because competition for audience is increasingly fierce and financially consequential, longstanding newsroom norms have come undone. We believe that journalists do not rebuild reputation through a race to the bottom. Rather, we think that it’s imperative that newsrooms actively take the high ground and re-embrace strategic silence in order to defy extremists’ platforms for spreading hate.”


magazine article | 05.24.18

Effortless Slippage

Ingrid Burrington

In e-flux, Data & Society INFRA Lead Ingrid Burrington contemplates the maps of the internet.

“The historical maps made of the internet—and, later, the maps of the world made by the internet—are both reflection and instrument of the ideologies and entanglements of the networked world. They are one way we might navigate the premise of the networked citizen and her obligations to her fellow travelers in the networked landscape.”

