filtered by: intelligent systems


On April 26-27, Data & Society hosted a multidisciplinary workshop on AI and Human Rights. In this Points piece, Data + Human Rights Research Lead Mark Latonero and Research Analyst Melanie Penagos summarize discussions from the workshop.

“Can the international human rights framework effectively inform, shape, and govern AI research, development, and deployment?”


working paper | 03.02.18

The Intuitive Appeal of Explainable Machines

Andrew Selbst, Solon Barocas

In this paper, Data & Society Postdoctoral Scholar Andrew Selbst and Affiliate Solon Barocas respond to calls for explainable machines.

“We argue that calls for explainable machines have failed to recognize the connection between intuition and evaluation and the limitations of such an approach. A belief in the value of explanation for justification assumes that if only a model is explained, problems will reveal themselves intuitively. Machine learning, however, can uncover relationships that are both non-intuitive and legitimate, frustrating this mode of normative assessment. If justification requires understanding why the model’s rules are what they are, we should seek explanations of the process behind a model’s development and use, not just explanations of the model itself.”


points | 01.17.18

Don’t Call AI “Magic”

Madeleine Clare Elish

Artificial intelligence is increasingly being used across multiple sectors, and people often refer to its function as “magic.” In this blogpost, D&S researcher Madeleine Clare Elish points out that there is nothing magical about AI and reminds us that the human labor involved in making AI systems work is often rendered invisible.

“From one perspective, this makes sense: Working like magic implies impressive and seamless functionality and the means by which the effect was achieved is hidden from view or even irrelevant. Yet, from another perspective, implying something works like magic focuses attention on the end result, denying an accounting of the means by which that end result was reached.”


In late August, D&S Researcher Anne Washington talked about the real-world implications of AI with the Centre for Public Impact’s Joel Tito.

Who wants to talk about the end of the world? CPI’s Joel Tito and Data & Society’s Anne Washington certainly do – and it’s this discussion point which kicks off our latest podcast.

Washington, a digital government scholar whose work addresses emerging policy needs for data science, tells Joel what she is most afraid of when it comes to artificial intelligence (AI) and its application to government. She also explains that while AI can make processes more efficient and streamlined, it shouldn’t be used for “really complicated human decisions”. Find out why here, as well as whether she thinks we should seek inspiration from the Romans or the ancient Greeks when it comes to AI and government…


D&S advisor Anil Dash discusses “fake markets” that are dominated by a few tech companies.

Worse, we’ve lost the ability to discern that a short-term benefit for some users that’s subsidized by an unsustainable investment model will lead to terrible long-term consequences for society. We’re hooked on the temporary infusion of venture capital dollars into vulnerable markets that we know are about to be remade by technological transformation and automation. The only social force empowered to anticipate or prevent these disruptions are policymakers who are often too illiterate to understand how these technologies work, and who too desperately want the halo of appearing to be associated with “high tech”, the secular religion of America.

 


D&S advisor Claudia Perlich discusses modeling, transparency, and machine learning in a new episode of the Partially Derivative podcast.

“One pitfall I see is that it’s easy from a social science perspective to condemn all data science as evil…but that ultimately doesn’t help advance the situation.”


paper | 10.19.16

Discriminating Tastes: Customer Ratings as Vehicles for Bias

Alex Rosenblat, Karen Levy, Solon Barocas, Tim Hwang

D&S researchers Alex Rosenblat and Tim Hwang and D&S affiliates Solon Barocas and Karen Levy examine how bias may creep into evaluations of Uber drivers through consumer-sourced rating systems:

“Through the rating system, consumers can directly assert their preferences and their biases in ways that companies are prohibited from doing on their behalf. The fact that customers may be racist, for example, does not license a company to consciously or even implicitly consider race in its hiring decisions. The problem here is that Uber can cater to racists, for example, without ever having to consider race, and so never engage in behavior that amounts to disparate treatment. In effect, companies may be able to perpetuate bias without being liable for it.”


Nature | 10.13.16

There is a blind spot in AI research

Kate Crawford, Ryan Calo

D&S affiliate Kate Crawford, with Ryan Calo, wrote this piece on the social risks of artificial intelligence and the blind spot in AI research around them.

Artificial intelligence presents a cultural shift as much as a technical one. This is similar to technological inflection points of the past, such as the introduction of the printing press or the railways. Autonomous systems are changing workplaces, streets and schools. We need to ensure that those changes are beneficial, before they are built further into the infrastructure of every­day life.


D&S affiliate Wilneida Negrón offers five tips for making AI research more inclusive.

Although a step in the right direction, the Partnership on AI does highlight a certain conundrum — what exactly is it that we want from Silicon Valley’s tech giants? Do we want a seat at their table? Or are we asking for a deeper and more sustaining type of participation? Or perhaps, more disturbingly, is it too late for any truly inclusive and meaningful participation in the development of future AI technologies?


paper | 10.02.16

Exploring or Exploiting? Social and Ethical Implications of Autonomous Experimentation in AI

Sarah Bird, Solon Barocas, Kate Crawford, Fernando Diaz, Hanna Wallach

Sarah Bird, Fernando Diaz, and Hanna Wallach, with D&S affiliates Solon Barocas and Kate Crawford, wrote this analysis of the social and ethical implications of autonomous experimentation in AI.

In the field of computer science, large-scale experimentation on users is not new. However, driven by advances in artificial intelligence, novel autonomous systems for experimentation are emerging that raise complex, unanswered questions for the field. Some of these questions are computational, while others relate to the social and ethical implications of these systems. We see these normative questions as urgent because they pertain to critical infrastructure upon which large populations depend, such as transportation and healthcare. Although experimentation on widely used online platforms like Facebook has stoked controversy in recent years, the unique risks posed by autonomous experimentation have not received sufficient attention, even though such techniques are being trialled on a massive scale. In this paper, we identify several questions about the social and ethical implications of autonomous experimentation systems. These questions concern the design of such systems, their effects on users, and their resistance to some common mitigations.

 


book | 09.29.16

An AI Pattern Language

Madeleine Clare Elish, Tim Hwang

D&S researchers Madeleine Clare Elish and Tim Hwang discuss the social challenges of AI in a new collection of essays, An AI Pattern Language.

In A Pattern Language, the central problem is the built environment. While our goal here is not as grand as the city planner’s, we took inspiration from the values of equity and mutual responsibility, as well as the accessible form, found in A Pattern Language. Like those patterns, this document attempts to develop a common language of problems and potential solutions that appear in different contexts and at different scales of intervention.

 


paper | 09.16.16

The Wisdom of the Captured

Alex Rosenblat, Tim Hwang

D&S researchers Alex Rosenblat and Tim Hwang analyze how the data widely captured by networked technologies, which enables those technologies to make intelligent decisions, may negatively impact users.

More broadly, how might the power dynamics of user and platform interact with the marketing surrounding these technologies to produce outcomes which are perceived as deceptive or unfair? This provocation paper assembles a set of questions on the capacity for machine learning practices to create undisclosed violations of the expectations of users – expectations often created by the platform itself — when applied to public-facing network services. It draws on examples from consumer-facing services, namely GPS navigation services like Google Maps or Waze, and on the experiences of Uber drivers, in an employment context, to explore user assumptions about personalization in crowd-sourced, networked services.


Medium | 09.11.16

Artificial intelligence is hard to see

Kate Crawford, Meredith Whittaker

D&S affiliate Kate Crawford, with Meredith Whittaker, wrote a compelling piece, sparked by Facebook’s censorship of “The Terror of War” photograph, on the social impacts of artificial intelligence.

The core issue here isn’t that AI is worse than the existing human-led processes that serve to make predictions and assign rankings. Indeed, there’s much hope that AI can be used to provide more objective assessments than humans, reducing bias and leading to better outcomes. The key concern is that AI systems are being integrated into key social institutions, even though their accuracy, and their social and economic effects, have not been rigorously studied or validated.


D&S researcher Madeleine Clare Elish makes the case for ethnography and anthropology’s role in studying automation and intelligent systems.

Cultural perceptions of the role of humans in automated and robotic systems need to be updated in order to protect against new forms of consumer and worker harms. The symptoms of moral crumple zones (at the risk of mixing metaphors) are some of the phenomena that human factors researchers have been studying for years, such as deskilling, skill atrophy, and impossible cognitive workloads. One of the consequences is that the risks and rewards of technological development do not necessarily develop in the broader public interest. As with previous transitions in the history of automation, new technologies do not so much do away with the human but rather obscure the ways in which human labor and social relations are reconfigured.


Slate | 06.16.16

Letting autopilot off the hook

Madeleine Clare Elish

D&S researcher Madeleine Clare Elish discusses the complexities of error in automated systems. Elish argues that the human role in automated systems has become ‘the weak link, rather than the point of stability’.

We need to demand designers, manufacturers, and regulators pay attention to the reality of the human in the equation. At stake is not only how responsibility may be distributed in any robotic or autonomous system, but also how the value and potential of humans may be allowed to develop in the context of human-machine teams.



In this Working Paper from We Robot 2016, D&S Researcher Madeleine Elish employs the concept of “moral crumple zones” within human-machine systems as a lens through which to think about the limitations of current frameworks for accountability in human-machine or robot systems.

Abstract:

A prevailing rhetoric in human-robot interaction is that automated systems will help humans do their jobs better. Robots will not replace humans, but rather work alongside and supplement human work. Even when most of a system will be automated, the concept of keeping a “human in the loop” assures that human judgment will always be able to trump automation. This rhetoric emphasizes fluid cooperation and shared control. In practice, the dynamics of shared control between human and robot are more complicated, especially with respect to issues of accountability.

As control has become distributed across multiple actors, our social and legal conceptions of responsibility remain generally about an individual. If there’s an accident, we intuitively — and our laws, in practice — want someone to take the blame. The result of this ambiguity is that humans may emerge as “moral crumple zones.” Just as the crumple zone in a car is designed to absorb the force of impact in a crash, the human in a robotic system may become simply a component — accidentally or intentionally — that is intended to bear the brunt of the moral and legal penalties when the overall system fails.

This paper employs the concept of “moral crumple zones” within human-machine systems as a lens through which to think about the limitations of current frameworks for accountability in human-machine or robot systems. The paper examines two historical cases of “moral crumple zones” in the fields of aviation and nuclear energy and articulates the dimensions of distributed control at stake while also mapping the degree to which this control of and responsibility for an action are proportionate. The argument suggests that an analysis of the dimensions of accountability in automated and robotic systems must contend with how and why accountability may be misapplied and how structural conditions enable this misunderstanding. How do non-human actors in a system effectively deflect accountability onto other human actors? And how might future models of robotic accountability require this deflection to be controlled? At stake is the potential ultimately to protect against new forms of consumer and worker harm.

This paper presents the concept of the “moral crumple zone” as both a challenge to and an opportunity for the design and regulation of human-robot systems. By articulating mismatches between control and responsibility, we argue for an updated framework of accountability in human-robot systems, one that can contend with the complicated dimensions of cooperation between human and robot.

 


paper | 03.10.16

Limitless Worker Surveillance

Ifeoma Ajunwa, Kate Crawford, Jason Schultz

D&S Affiliates Ifeoma Ajunwa, Kate Crawford, and Jason Schultz examine the effectiveness of the law as a check on worker surveillance, given recent technological innovations. This law review article focuses on popular trends in worker tracking – productivity apps and worker wellness programs – to argue that current legal constraints are insufficient and may leave American workers at the mercy of 24/7 employer monitoring. They also propose a new comprehensive framework for worker privacy protections that should withstand current and future trends.

Abstract:

From the Pinkerton private detectives of the 1850s, to the closed-circuit cameras and email monitoring of the 1990s, to contemporary apps that quantify the productivity of workers, American employers have increasingly sought to track the activities of their employees. Along with economic and technological limits, the law has always been presumed as a constraint on these surveillance activities. Recently, technological advancements in several fields – data analytics, communications capture, mobile device design, DNA testing, and biometrics – have dramatically expanded capacities for worker surveillance both on and off the job. At the same time, the cost of many forms of surveillance has dropped significantly, while new technologies make the surveillance of workers even more convenient and accessible. This leaves the law as the last meaningful avenue to delineate boundaries for worker surveillance.

 


Quartz | 07.26.15

Override

Gideon Lichfield

Data & Society’s Intelligence and Autonomy initiative commissioned authors to envision future scenarios for intelligent systems in four domains: medicine, labor, urban design, and warfare.

The future scenario around labor, this story by Gideon Lichfield titled Override, was published on Quartz in July 2015.


“In a self-driving car, the control of the vehicle is shared between the driver and the car’s software. How the software behaves is in turn controlled — designed — by the software engineers. It’s no longer true to say that the driver is in full control… Nor does it feel right to say that the software designers are entirely in control.
“Yet as control becomes distributed across multiple actors, our social and legal conceptions of responsibility are still generally about an individual. If there’s a crash, we intuitively — and our laws, in practice — want someone to take the blame.
“The result of this ambiguity is that humans may emerge as ‘liability sponges’ or ‘moral crumple zones.'”

At Data & Society’s Intelligence and Autonomy forum in March 2015, “moral crumple zone” emerged as a useful shared term for the way the “human in the loop” is saddled with liability in the failure of an automated system.

In this essay in Quartz, Madeleine Clare Elish and Tim Hwang explore the problem that “moral crumple zone” names, with reference to cruise control, self-driving cars, and autopilot.



Motherboard | 05.25.15

The Counselor

Robin Sloan

Data & Society’s Intelligence and Autonomy initiative commissioned authors to envision future scenarios for intelligent systems in four domains: medicine, labor, urban design, and warfare.

The future scenario around medicine, this story by Robin Sloan titled The Counselor, was published on VICE’s Motherboard in May 2015 along with a commentary by Tim Hwang and Madeleine Clare Elish.


“As policy concerns around intelligent and autonomous systems come to focus increasingly on transparency and usability, the time is ripe for an inquiry into the theater of autonomous systems. When do (and when should) law and policy explicitly regulate the optics of autonomous systems (for instance, requiring electric vehicle engines to ‘rev’ audibly for safety reasons) as opposed to their actual capabilities? What are the benefits and dangers of doing so? What economic and social pressures compel a focus on system theater, and what are the ethical and policy implications of such a focus?”

D&S fellows Karen Levy and Tim Hwang presented a paper on The Presentation of the Machine in Everyday Life (discussant: Evan Selinger) at WeRobot 2015.


Abstract: What will happen to current regimes of liability when driverless cars become commercially available? What happens when there is no human actor—only a computational agent—responsible for an accident? This white paper addresses these questions by examining the historical emergence and response to autopilot and cruise control. Through an examination of technical, social and legal histories, we observe a counter-intuitive focus on human responsibility even while human action is increasingly replaced by automation. We argue that a potential legal crisis with respect to driverless cars and other autonomous vehicles is unlikely. Despite this, we propose that the debate around liability and autonomous systems be reframed to more precisely reflect the agentive role of designers and engineers and the new and unique kinds of human action attendant to autonomous systems. The advent of commercially available autonomous vehicles, like the driverless car, presents an opportunity to reconfigure regimes of liability that reflect realities of informational asymmetry between designers and consumers. Our paper concludes by offering a set of policy principles to guide future legislation.

“Praise the Machine! Punish the Human! The Contradictory History of Accountability in Automated Aviation” is the first paper in the Intelligence and Autonomy project’s series of Comparative Studies in Intelligent Systems. Intelligence and Autonomy is supported by the John D. and Catherine T. MacArthur Foundation.

(Image from: International Telephone and Telegraph Corporation. May 1957. Advertisement. Broadcasting · Telecasting, 139.)


primer | 10.08.14

Future of Labor: Understanding Intelligent Systems

Alex Rosenblat, Tamara Kneese, danah boyd

Science fiction has long imagined a workforce reshaped by robots, but the increasingly common instantiation of intelligent systems in business is much more mundane. Beyond the utopian and dystopian hype of increased efficiencies and job displacement, how do we understand what disruptions intelligent systems will have on the workforce?

This document was produced as a part of the Future of Work Project at Data & Society Research Institute. This effort is supported by the Open Society Foundations’ U.S. Programs Future of Work inquiry, which is bringing together a cross-disciplinary and diverse group of thinkers to address some of the biggest questions about how work is transforming and what working will look like 20-30 years from now. The inquiry is exploring how the transformation of work, jobs and income will affect the most vulnerable communities, and what can be done to alter the course of events for the better.

