

Stop the Presses?

American Behavioral Scientist | 09.29.19

Joan Donovan, danah boyd

In this journal article, Data & Society Affiliate Joan Donovan and Founder and President danah boyd take a historical look at the news media ecosystem and the deployment of strategic silence, and then conceptualize a new approach—“strategic amplification.”

“Drawing on the history of strategic silence, we argue for a new editorial approach—‘strategic amplification’—which requires both news media organizations and platform companies to develop and employ best practices for ensuring responsibility and accountability when producing news content and the algorithmic systems that help spread it.”


Data Voids

report | 10.29.19

Michael Golebiewski, danah boyd

Data Voids demonstrates how manipulators expose people to problematic content by exploiting search engine results.


Drawing on insights from her co-authored report Deepfakes and Cheap Fakes, Data & Society Affiliate Britt Paris emphasizes who will be most negatively impacted by deepfake technologies.

“What is far too little discussed are the ways in which faked images and videos are already wielded as weapons against women, people of color and those questioning powerful systems.”


Deepfakes and Cheap Fakes

report | 09.18.19

Britt Paris, Joan Donovan

Deepfakes and Cheap Fakes traces decades of AV manipulation to demonstrate how evolving technologies aid consolidations of power in society.


Data & Society Researchers Jacob Metcalf, Emanuel Moss, and Founder and President danah boyd investigate the tensions that arise when Silicon Valley tech companies adopt ethics initiatives.

“By talking with people who are at the forefront of thinking through ethics from within the technology sector, we found that the commitment to ethics is in tension with—and at risk of being absorbed within—broader and longer-standing industry commitments to meritocracy, technological solutionism, and market fundamentalism.”


Source Hacking: Media Manipulation in Practice

report | 09.04.19

Joan Donovan, Brian Friedberg

Source Hacking details the techniques media manipulators use to target journalists and other influential public figures so that they pick up falsehoods and unknowingly amplify them to the public.


How to Cite Like a Badass Tech Feminist Scholar of Color

points | 08.22.19

Rigoberto Lara Guzmán, Sareeta Amrute

How can citation practices be used as a strategy to decolonize tech research? Data & Society Events Production Assistant Rigoberto Lara Guzmán and Director of Research Sareeta Amrute ask this question in their new zine.

“So, make it a habit to do a ‘badass feminist tech scholar of color’ scan on everything you write, every speech you are about to give, and all those emails you are about to answer. Ask yourself, for each topic you present, each yes or no you give to a request, where are the women of color? Who can I suggest who would be a better person than me to be the expert here? Who do I want to be in community with?”



Technology can often do more harm than good in humanitarian situations. In an op-ed for The New York Times, Research Lead Mark Latonero argues against surveillance humanitarianism.

“Despite the best intentions, the decision to deploy technology like biometrics is built on a number of unproven assumptions, such as, technology solutions can fix deeply embedded political problems. And that auditing for fraud requires entire populations to be tracked using their personal data. And that experimental technologies will work as planned in a chaotic conflict setting. And last, that the ethics of consent don’t apply for people who are starving.”


This past year, 2018-2019 Data & Society Fellow Cynthia Conti-Cook tackled an aspect of the criminal justice system that lacks data: police misconduct. Her talk explores how this data gap came to be through police unions’ claims to the Right to be Forgotten, and draws important lessons about how government actors exploit privacy rhetoric to cover up rights violations.

Listen to the podcast with transcript at:
https://listen.datasociety.net/exposing-police-misconduct/.

Cynthia Conti-Cook is a staff attorney in the Special Litigation Unit of New York City’s Legal Aid Society, where she oversees the Cop Accountability Project and Database and leads impact litigation and law reform projects on issues involving policing, data collection, risk assessment instruments, and the criminal justice system generally. She has presented as a panelist and trainer at many national, New York State, and New York City venues on topics of police misconduct, technology in the criminal justice system, and risk assessment instruments.


Cryptoparty as Rent Party

podcast | 06.19.19

Jasmine E. McNealy

2018-2019 Data & Society Fellow Jasmine E. McNealy compares Cryptoparties to the goals and aspirations of the famous rent parties of the Harlem Renaissance. Both represent communities filling in the gaps in infrastructure to support each other. While the rent party helped pay the rent through nights of celebration, jazz, and revelry, McNealy’s research shows that the Cryptoparty strives for a similar freedom by teaching community members how to safely navigate harmful surveillance technologies.

Listen to the podcast with transcript at:
https://listen.datasociety.net/cryptoparty-as-rent-party/.

Jasmine E. McNealy is an assistant professor of telecommunication at the University of Florida College of Journalism and Communications. She studies information, communication, and technology with a view toward influencing law and policy. Her research focuses on privacy, online media, communities, and culture.


2018-19 Data & Society Fellow Jessie Daniels offers strategies for racial literacy in tech grounded in intellectual understanding, emotional intelligence, and a commitment to take action. In this podcast, Daniels describes how the biggest barrier to racial literacy in tech is “thinking that race doesn’t matter in tech.” She argues that “without racial literacy in tech, without a specific and conscious effort to address race, we will certainly be recreating a high-tech Jim Crow: a segregated, divided, unequal future, sped-up, spread out, and automated through algorithms, AI, and machine learning.”

Listen to the podcast with transcript at:
https://listen.datasociety.net/why-now-is-the-time-for-racial-literacy-in-tech/.

Jessie Daniels, PhD, is a Professor at Hunter College (Sociology) and at The Graduate Center, CUNY (Africana Studies, Critical Social Psychology, and Sociology). She earned her PhD from the University of Texas at Austin and held a Charles Phelps Taft postdoctoral fellowship at the University of Cincinnati. Her main area of interest is race and digital media technologies; she is an internationally recognized expert on Internet manifestations of racism. Daniels is the author or editor of five books, has bylines at The New York Times, DAME, The Establishment, and Entropy, and writes a regular column at Huffington Post.

Her recent paper, “Advancing Racial Literacy in Tech,” co-authored with 2018-19 Fellow Mutale Nkonde and 2017-18 Fellow Darakhshan Mir, can be found at http://www.racialliteracy.tech.


As part of a series of lightning talks based on the culminating work of our 2018-2019 Fellows cohort, Chancey Fleet presents “Dark Patterns in Accessibility Tech.” In this talk she shares her experiences navigating everyday web transactions as a Blind individual. Fleet identifies the cost of not designing with accessibility in mind, and encourages product makers to bring more diverse experiences into the planning and implementation of their products.

Listen to the podcast with transcript at:
https://listen.datasociety.net/dark-patterns-in-accessibility-tech/.

Chancey Fleet, a Brooklyn-based accessibility advocate, coordinates technology education programs at the New York Public Library’s Andrew Heiskell Braille and Talking Book Library. Fleet was recognized as a 2017 Library Journal Mover and Shaker. She writes and presents to disability rights groups, policy-makers, and professionals about the intersections of disability and technology. During her fellowship at Data & Society, Fleet worked to advance public understanding of and to explore best practices for visual interpreter services as well as other technologies for accessibility whose implications resonate with the broader global conversations about digital equity, data ethics, and privacy.

She proudly serves as the vice president of the National Federation of the Blind of New York.


Research Analyst Kinjal Dave urges us to move past the individual framing of “bias” to critically examine broader socio-technical systems.

“When we stop overusing the word ‘bias,’ we can begin to use language that has been designed to theorize at the level of structural oppression.”


Advancing Racial Literacy in Tech

paper | 05.22.19

Jessie Daniels, Mutale Nkonde, Darakhshan Mir

How can we do less harm to communities of color with the technology we create?


In their new paper Advancing Racial Literacy in Tech, Data & Society 2018-19 Fellows Jessie Daniels and Mutale Nkonde and 2017-18 Fellow Darakhshan Mir urge tech companies to adopt racial literacy practices in order to break out of old patterns.

Conceived and launched under Data & Society’s fellowship program, this paper moves past conversations of implicit bias to think about racism in tech at a systems level. The authors offer strategies grounded in intellectual understanding, emotional intelligence, and a commitment to take action.

“The real goal of building capacity for racial literacy in tech is to imagine a different world, one where we can break free from old patterns. This will take real leadership to take this criticism seriously and a willingness to assess the role that tech products, company culture, and supply chain practices may have in perpetuating structural racism.”

To follow the project and learn more, visit https://racialliteracy.tech/.


When Humans Attack

points | 05.14.19

Madeleine Clare Elish, Elizabeth Watkins

As AI becomes integrated into our everyday lives, how might it be gamed by humans, and how should we reconceptualize our notions of security?

“It is imperative to leverage a socio-technical frame to conceptualize safe and secure AI.”


Silicon Valley Nationalism

points | 05.08.19

Alex Rosenblat

In this Points essay, Researcher Alex Rosenblat connects Uber’s ideology to epistemological fragmentation in the U.S.

“Uber’s technology ideology comes from Silicon Valley, and how that becomes entrenched in law and practice is a microcosm of a larger political battle for power and governance.”


The Legislation That Targets The Racist Impacts of Tech

The New York Times | 05.07.19

Margot E. Kaminski and Andrew D. Selbst

The Algorithmic Accountability Act is a step forward, but there’s still room for improvement. Postdoctoral Scholar Andrew Selbst and Margot Kaminski explain.

“The bill is a meaningful first step in addressing the problems with algorithmic decision-making. Companies must be pushed to consider and document what goes into algorithm design. They should be pushed, too, to come up with solutions. But the bill is lacking in three main areas.”


In this article, Research Lead Madeleine Clare Elish investigates who bears the responsibility when an automated system fails.

“Just as the crumple zone in a car is designed to absorb the force of impact in a crash, the human in a highly complex and automated system may become simply a component—accidentally or intentionally—that bears the brunt of the moral and legal responsibilities when the overall system malfunctions.”


In this op-ed for The New York Times, Data & Society Research Lead Mary Madden argues that there is no “one size fits all” solution to privacy concerns in the digital age.

“When those who influence policy and technology design have a lower perception of privacy risk themselves, it contributes to a lack of investment in the kind of safeguards and protections that vulnerable communities both want and urgently need.”


Accountable Algorithmic Futures

points | 04.19.19

Andrew Selbst, Madeleine Clare Elish, Mark Latonero

The Algorithmic Accountability Act is a great first step, and regulators who are tasked with implementing it should take sociotechnical frames into account.

“A sociological lens can help illuminate appropriate points of institutional intervention.”


Data & Society Labor Engagement Lead Aiha Nguyen explains how technology is enabling new and more invasive forms of surveilling workers.

“As technology expands the scope and scale of what can be done with surveillance tools, workplace protections must also evolve.”


Digital Identity in the Migration & Refugee Context

report | 04.15.19

Mark Latonero, Keith Hiatt, Antonella Napolitano, Giulia Clericetti, Melanie Penagos

Digital Identity in the Migration & Refugee Context analyzes the challenges of continually collecting identity data from migrants & refugees.


On April 4, 2019, Data & Society Executive Director Janet Haven and Postdoctoral Scholar Andrew Selbst testified before the New York City Council’s Committee on Technology about the Open Algorithms Law (Local Law 49 of 2018). They called for the Automated Decision Systems Task Force to be given access to “details of ADS systems in use by specific agencies” and to commit to a public engagement process.

Others who testified include Task Force members Solon Barocas and Julia Stoyanovich and BetaNYC Executive Director Noel Hidalgo. A video of the full hearing is available online.

Please find Janet Haven and Andrew Selbst’s written testimony below. 


Our names are Janet Haven and Andrew D. Selbst. We are the executive director and a postdoctoral scholar at the Data & Society Research Institute, an independent non-profit research center dedicated to studying the social and cultural impacts of data-driven and automated technologies. Over the past five years, Data & Society has focused on the social and legal impacts of automated decision-making and artificial intelligence, publishing research and advising policymakers and industry actors on issues such as algorithmic bias, explainability, transparency, and accountability more generally.

Government services and operations play a crucial role in the lives of New York City’s citizens. Transparency and accountability in a government’s use of automated decision-making systems matters. Across the country, automated decision-making systems based on nonpublic data sources and algorithmic models currently inform decision-making on policing, criminal justice, housing, child welfare, educational opportunities, and myriad other fundamental issues.

This Task Force was set up to begin the hard work of building transparent and accountable processes to ensure that the use of such systems in New York City is geared to just outcomes, rather than only those which are most efficient.  The adoption of such systems requires a reevaluation of current approaches to due process and the adoption of appropriate safeguards. It may require entirely new approaches to accountability when the city uses automated systems, as many such systems, through their very design, can obscure or conceal policy or decision-making processes.

We at Data & Society lauded the decision to establish a Task Force focused on developing a better understanding of these issues. Indeed, we celebrated the city leadership’s prescience in being the first government in the nation to establish a much-needed evidence base regarding the inherent complexity accompanying ADS adoption across multiple departments. To date, however, we have seen little evidence that the Task Force is living up to its potential. New York has a tremendous opportunity to lead the country in defining these new public safeguards, but time is growing short to deliver on the promise of this body.

We want to make two main points in our testimony today.

First, for the Task Force to complete its mandate in any meaningful sense, it must have access to the details of ADS systems in use by specific agencies and the ability to work closely with representatives from across agencies using ADS. We urge that Task Force members be given immediate access to specific, agency-level automated decision-making systems currently in use, as well as to the leadership in those departments, and others with insight into the design and use of these systems.

Social context is essential to defining fair and just outcomes.[1] The city is understood to be using ADS in such diverse contexts as housing, education, child services, and criminal justice. The very idea of a fair or just outcome is impossible to define or debate without reference to the social context. Understanding the different value tradeoffs in decisions about pretrial risk assessments tells you nothing whatsoever about school choice. What is fair, just, or accountable in public housing policy says nothing about what is fair, just, and accountable in child services. This ability to address technological systems within the social context where they are used is what makes the ADS Task Force so important, and potentially so powerful in defining real accountability measures.

The legislative mandate itself also demonstrates why the Task Force requires access to agency technologies. Under the enacting law, the purpose of the Task Force is to make recommendations particular to the City’s agencies.[2] Specifically, the Task Force must make recommendations for procedures by which explanations of the decisions can be requested, biases can be detected, harms from biases can be redressed, the public can assess the ADS, and the systems and data can be archived.[3] Each of these recommendations applies not to automated decision systems generally, but to “agency automated decision systems,” a term defined separately in the text of the law.[4] Importantly, the law also mandates that the Task Force make recommendations about “[c]riteria for identifying which agency automated decision systems” should be subject to these procedures.[5] Thus, the legislative mandate makes clear that for the Task Force to do its work, it will require access to the technologies that city agencies currently use or plan to use, as well as the people in charge of their operation. Lacking this level of detail on actual agency-level use of automated decision-making systems, the recommendations can only be generic. Such generic recommendations will be ineffective because they will not be informative enough for the city to act on.

If the city wanted to find generic guidelines or recommendations for ADSs, it could have looked to existing scholarship on these issues instead of forming a Task Force. Indeed, there is an entire interdisciplinary field of scholarship that has emerged in the last several years, dedicated to the issues of Fairness, Accountability and Transparency (FAT*) in automated systems.[6] This field has made significant strides in coming up with mathematical definitions for fairness that computers can parse, and creating myriad potential methods for bias reduction in automated systems.

But the academic work has fundamental limitations. Much of the research is, by necessity or due to limited access, based on small hypothetical scenarios—toy problems—rather than real-world applications of machine learning technology.[7] This work is accomplished, as is characteristic of theoretical modeling, by stating assumptions about the world and datasets that are being used. In order to translate these solutions to the real world, researchers would have to know whether the datasets and other assumptions match the real-world scenarios.

Using information from city agencies, the Task Force has the ability to advance beyond the academic focus on toy problems devoid of social context and assess particular issues for systems used in practice. Without information about the systems in use, the Task Force’s recommendations will be limited to procedures at the greatest level of generality—things we already would guess, such as testing the system for bias or keeping it less complex so as to be explainable. But with information about these systems, the Task Force can examine the particular challenges and tradeoffs at issue. With community input and guidance, it can assess the appropriateness of different definitions of bias in a given context, and debate trade-offs between accuracy and explainability given specific social environments. The recommendations of the Task Force will only be useful if they are concrete and actionable, and that can only be achieved if the Task Force is allowed to examine the way ADS operate in practice with a view into both the technical and the social systems informing outcomes.

Second, we urge the Task Force to prioritize public engagement. Because social context is essential to defining fair and just outcomes, meaningful engagement with community stakeholders is fundamental to this process. Once the Task Force has access to detailed information about ADS systems in use, public listening sessions must be held to understand community experiences and concerns, with the goal of using that feedback to shape the Task Force’s process going forward. Iterating on and reviewing recommendations with community stakeholders as this work moves forward will be important to arriving at truly transparent, accountable, and just outcomes.

We are here today because we continue to believe the Task Force has great potential. We strongly believe that the Task Force’s work needs to be undertaken thoughtfully and contextually, centering on cooperation, transparency, and public engagement.  The Task Force’s goal needs to be offering actionable and concrete recommendations on the use of ADS in New York City government. We hope that the above testimony provides useful suggestions to move toward that goal.

Thank you.


[1] See generally Andrew D. Selbst et al., Fairness and Abstraction in Sociotechnical Systems, Proceedings of the 2019 ACM Conference on Fairness, Accountability, and Transparency (ACM FAT*), at 59.

[2] See Local Law No. 49 of 2018, Council Int. No. 1696-A of 2017 [hereinafter Local Law 49] (repeatedly referring to “agency automated decision systems”).

[3] Id. §§ 3(b)–(f).

[4] Id. § 1(a).

[5] Id. § 3(a) (emphasis added).

[6] ACM Conference on Fairness, Accountability, and Transparency (ACM FAT*), https://fatconference.org/

[7] See generally Andrew D. Selbst et al., Fairness and Abstraction in Sociotechnical Systems, Proceedings of the 2019 ACM Conference on Fairness, Accountability, and Transparency (ACM FAT*), at 59.


Explainer: Workplace Monitoring & Surveillance

explainer | 02.06.19

Alexandra Mateescu, Aiha Nguyen

Technology enables employers to increasingly monitor their employees. This explainer by Alexandra Mateescu and Aiha Nguyen identifies four current trends in workplace monitoring and surveillance: prediction and flagging tools; biometrics and health data; remote monitoring and time-tracking; and gamification and algorithmic management.

Mateescu and Nguyen consider how each trend impacts workers and workplace dynamics. For instance, freelancers on Upwork can be tracked through their keystrokes, mouse clicks, and screenshots to measure work time for clients. Work that cannot be measured in this way (for example, group brainstorming or long-term planning) may be devalued or go uncompensated.
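One way to see how such activity-based tracking undervalues untracked work is with a minimal sketch. This is an illustration under stated assumptions, not Upwork's actual method: the logged timestamps and the active_minutes helper are hypothetical, and the only point is that minutes with no logged input never enter the total.

```python
# Minimal sketch (not any real product's method): count a minute as "worked"
# only when it contains a logged keyboard or mouse event. Work with no such
# events -- reading, thinking, planning -- simply never shows up in the total.
from datetime import datetime

event_timestamps = [  # hypothetical logged keystrokes/clicks
    "2019-02-06 09:00:12", "2019-02-06 09:00:48",
    "2019-02-06 09:01:05", "2019-02-06 09:07:30",
]

def active_minutes(timestamps):
    # Bucket each event into its minute and count distinct minutes.
    minutes = {datetime.fromisoformat(ts).replace(second=0, microsecond=0)
               for ts in timestamps}
    return len(minutes)

# Four events spread over eight minutes register as only three "active" minutes;
# the quiet stretch in between is invisible to the metric.
print(active_minutes(event_timestamps))  # -> 3
```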

The authors observe that information asymmetries are deepening as the boundaries of workplace privacy change. Tracking metrics like health data, for instance, can open the door to discrimination and raise concerns about consent. The type of data employers collect will determine which work is valued, how they evaluate performance, and how workers are classified and compensated.


This explainer from Data & Society provides a basic introductory overview of concepts and current issues around technology’s impact on the workplace. It is being co-released with an explainer on Algorithmic Management in the Workplace. For more coverage of emerging issues in labor and technology, visit Social Instabilities in Labor Futures.


Explainer: Algorithmic Management in the Workplace

explainer | 02.06.19

Alexandra Mateescu, Aiha Nguyen

This explainer by Alexandra Mateescu and Aiha Nguyen defines algorithmic management and reviews how this concept challenges workers’ rights in sectors including retail, the service industry, and delivery and logistics. The authors outline existing research on the ways that algorithmic management is manifesting across various labor industries, shifting workplace power dynamics, and putting workers at a disadvantage. It can enable increased surveillance and control while reducing transparency.

Defined as “a diverse set of technology tools and techniques that structure the conditions of work and remotely manage workforces,” algorithmic management relies on data collection and worker surveillance to enable automated decision-making in real time. For example, an algorithm might decide and assign servers’ shifts.
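A minimal sketch of what such automated shift assignment could look like follows. It is illustrative only; the demand forecast, the COVERS_PER_SERVER ratio, and the naive assign_shifts logic are hypothetical assumptions, not drawn from the explainer or from any real scheduling product.

```python
# Illustrative sketch only: an "algorithmic manager" that assigns servers to
# shifts by matching a predicted number of covers to available staff.
# All names and numbers here are hypothetical; real systems are far more complex.
forecast_covers = {"Fri dinner": 120, "Sat dinner": 150, "Sun brunch": 60}
available_servers = ["Ana", "Ben", "Chris", "Dee", "Eli"]
COVERS_PER_SERVER = 30  # hypothetical staffing ratio

def assign_shifts(forecast, servers, ratio):
    schedule = {}
    for shift, covers in forecast.items():
        needed = -(-covers // ratio)        # ceiling division: servers required
        schedule[shift] = servers[:needed]  # naive: take the first N available
    return schedule

# The schedule is generated automatically, with no manager in the loop;
# workers simply receive their shifts.
print(assign_shifts(forecast_covers, available_servers, COVERS_PER_SERVER))
```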

Because companies aren’t “directly” managing their workers, algorithmic management makes it easier to classify workers as independent contractors, relieving companies of the responsibility to provide standard worker benefits. Algorithmic management can open avenues for bias and discrimination while making it difficult to hold companies accountable. Companies ultimately benefit, continuing to scale operations while cutting costs and labor.


This explainer from Data & Society provides a basic introductory overview of concepts and current issues around technology’s impact on the workplace. It is being co-released with an explainer on Workplace Monitoring & Surveillance. For more coverage of emerging issues in labor and technology, visit Social Instabilities in Labor Futures.


AI in Context

report | 01.30.19

Alexandra Mateescu, Madeleine Clare Elish

“AI in Context” shows how automated and AI technologies are reconfiguring work in small family-owned farms and grocery stores.

 


Fairness and Abstraction in Sociotechnical Systems

ACM Conference on Fairness, Accountability, and Transparency (FAT*) | 12.05.18

Andrew D. Selbst, danah boyd, Sorelle Friedler, Suresh Venkatasubramanian, Janet Vertesi

In this paper, the authors identify the challenges to integrating fairness into machine learning-based systems and suggest next steps.

“In this paper, however, we contend that these concepts render technical interventions ineffective, inaccurate, and sometimes dangerously misguided when they enter the societal context that surrounds decision-making systems. We outline this mismatch with five “traps” that fair-ML work can fall into even as it attempts to be more context-aware in comparison to traditional data science. We draw on studies of sociotechnical systems in Science and Technology Studies to explain why such traps occur and how to avoid them. Finally, we suggest ways in which technical designers can mitigate the traps through a refocusing of design in terms of process rather than solutions, and by drawing abstraction boundaries to include social actors rather than purely technical ones.”


Don’t Believe Every AI You See

New America | 11.26.18

Madeleine Clare Elish, danah boyd

Data & Society Founder and President danah boyd and Researcher Madeleine Clare Elish lay the groundwork for questioning AI and ethics.

“Without comprehensively accounting for the strengths and weaknesses of technical practices, the work of ethics—which includes weighing the risks and benefits and potential consequences of an AI system—will be incomplete.”


Layers of Bias: A Unified Approach for Understanding Problems With Risk Assessment

Criminal Justice and Behavior | 11.23.18

Laurel Eckhouse, Kristian Lum, Cynthia Conti-Cook, Julie Ciccolini

Data & Society Fellow Cynthia Conti-Cook and co-authors assess the bias involved in risk assessment tools.

“In the top layer, we identify challenges to fairness within the risk-assessment models themselves. We explain types of statistical fairness and the tradeoffs between them. The second layer covers biases embedded in data. Using data from a racially biased criminal justice system can lead to unmeasurable biases in both risk scores and outcome measures. The final layer engages conceptual problems with risk models: Is it fair to make criminal justice decisions about individuals based on groups?”
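The tension between statistical fairness definitions described in the top layer can be made concrete with a small numerical sketch. The example below is illustrative only; the toy counts, the flag_rate and false_positive_rate helpers, and the choice of demographic parity versus error-rate balance as the two definitions are assumptions for demonstration, not figures or code from the paper.

```python
# Illustrative sketch only: toy numbers showing how two common statistical
# fairness definitions can pull in different directions. Not from the paper.
# Hypothetical risk-tool decisions for two groups (flagged = labeled "high risk").
group_a = {"flagged": 50, "total": 100, "false_positives": 20, "negatives": 60}
group_b = {"flagged": 50, "total": 100, "false_positives": 35, "negatives": 70}

def flag_rate(g):            # demographic parity compares these rates
    return g["flagged"] / g["total"]

def false_positive_rate(g):  # error-rate balance compares these rates
    return g["false_positives"] / g["negatives"]

# Equal flag rates (0.50 vs 0.50) satisfy demographic parity...
print(flag_rate(group_a), flag_rate(group_b))
# ...while false positive rates still differ (0.33 vs 0.50), violating
# error-rate balance -- the kind of tradeoff the report's top layer describes.
print(round(false_positive_rate(group_a), 2), round(false_positive_rate(group_b), 2))
```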


Data & Society 2015-2016 Fellow Wilneida Negron connects her past social work to her current work as a political scientist and technologist.

“We are at the cusp of a new wave of technological thinking, one defined by a new mantra that is the opposite of Zuckerberg’s: ‘Move carefully and purposely, and embrace complexity.’ As part of this wave, a new, inclusive, and intersectional generation of people are using technology for the public interest. This new wave will help us prepare for a future where technical expertise coexists with empathy, humility, and perseverance.”


As AI becomes integrated into different facets of our lives, Data & Society Researcher Kadija Ferryman joins Robert A. Winn in considering what this means for the field of health.

“How can we bring together the excitement for the possibilities of AI in medicine with the sobering reality of stubborn health disparities that remain despite technological advances?”


In Content or Context Moderation?, Robyn Caplan illustrates the organizational contexts of three types of content moderation strategies, drawing on interviews with representatives from 10 major digital platforms.


Data Craft

report | 11.05.18

Amelia Acker

Data Craft analyzes how bad actors manipulate metadata to create effective disinformation campaigns and provides tips for researchers and technology companies trying to spot this “data craft.”


Data & Society Health + Data Lead Mary Madden considers what patient privacy means in the current age of technology.

“In the era of data-driven medicine, systems for handling data need to avoid anything that feels like manipulation—whether it’s subtle or overt. At a minimum, the process of obtaining consent should be separated from the process of obtaining care.”


Weaponizing the Digital Influence Machine

report | 10.17.18

Anthony Nadler, Matthew Crain, and Joan Donovan

Weaponizing the Digital Influence Machine: The Political Perils of Online Ad Tech identifies the technologies, conditions, and tactics that enable today’s digital advertising infrastructure to be weaponized by political and anti-democratic actors.


When Your Boss Is an Algorithm

The New York Times | 10.12.18

Alex Rosenblat

Building on the research for her book Uberland: How Algorithms Are Rewriting the Rules of Work, Data & Society Researcher Alex Rosenblat explains algorithmic management in the gig economy.

“Data and algorithms are presented as objective, neutral, even benevolent: Algorithms gave us super-convenient food delivery services and personalized movie recommendations. But Uber and other ride-hailing apps have taken the way Silicon Valley uses algorithms and applied it to work, and that’s not always a good thing.”


How are AI technologies being integrated in health care? What are the broader implications of this integration? Data & Society Researcher Madeleine Clare Elish investigates.

“This paper examines the development of a machine learning-driven sepsis risk detection tool in a hospital Emergency Department in order to interrogate the contingent and deeply contextual ways in which AI technologies are likely to be adopted in healthcare.”


In Governing Artificial Intelligence: Upholding Human Rights & Dignity, Mark Latonero shows how human rights can serve as a “North Star” to guide the development and governance of artificial intelligence.

The report draws the connections between AI and human rights; reframes recent AI-related controversies through a human rights lens; and reviews current stakeholder efforts at the intersection of AI and human rights.


Alternative Influence

report | 09.18.18

Rebecca Lewis

Alternative Influence: Broadcasting the Reactionary Right on YouTube presents data from approximately 65 political influencers across 81 channels to identify the “Alternative Influence Network (AIN)”: an alternative media system that adopts the techniques of brand influencers to build audiences and “sell” them political ideology.


On September 13, 2018, Data & Society Founder and President danah boyd gave the keynote speech at the Online News Association Conference. Read the transcript of her talk on Points.

“Now, more than ever, we need a press driven by ideals determined to amplify what is most important for enabling an informed citizenry.”


Technology’s Impact on Infrastructure is a Health Concern

points | 09.10.18

Mikaela Pitcan, Alex Rosenblat, Mary Madden, Kadija Ferryman

Increasingly, technology’s impact on infrastructure is becoming a health concern. In this Points piece, Data & Society Researchers Mikaela Pitcan, Alex Rosenblat, Mary Madden, and Kadija Ferryman tease out why this intersection warrants further research.

“However, there is an urgent need to understand the interdependencies between technology, infrastructure and health, and how these relationships affect Americans’ ability to live the healthiest lives possible. How can we support design, decision-making, and governance of our infrastructures in order to ensure more equitable health outcomes for all Americans?”

