filtered by: automation


Data & Society Research Analyst Melanie Penagos summarizes three blogposts that came as a result of Data & Society’s AI & Human Rights Workshop in April 2018.

“Following Data & Society’s AI & Human Rights Workshop in April, several participants continued to reflect on the convening and comment on the key issues that were discussed. The following is a summary of articles written by workshop attendees Bendert Zevenbergen, Elizabeth Eagen, and Aubra Anthony.”


How will the introduction of AI into the field of medicine affect the doctor-patient relationship? Data & Society Fellow Claudia Haupt identifies some legal questions we should be asking.

“I contend that AI will not entirely replace human doctors (for now) due to unresolved issues in transposing diagnostics to a non-human context, including both limits on the technical capability of existing AI and open questions regarding legal frameworks such as professional duty and informed consent.”


The rollout of Electronic Visit Verification (EVV) for Medicaid recipients has serious privacy implications, argues Data & Society Researcher Jacob Metcalf.

“So why should we be worried about rules that require caregivers to provide an electronic verification of the labor provided to clients? Because without careful controls and ethical design thinking, surveillance of caregiver labor is also functionally surveillance of care recipients, especially when family members are employed as caregivers.”


points | 01.17.18

Don’t Call AI “Magic”

Madeleine Clare Elish

Artificial intelligence is increasingly being used across multiple sectors, and people often refer to its function as “magic.” In this blogpost, D&S researcher Madeleine Clare Elish points out that there’s nothing magical about AI and reminds us that the human labor involved in making AI systems work is often rendered invisible.

“From one perspective, this makes sense: Working like magic implies impressive and seamless functionality and the means by which the effect was achieved is hidden from view or even irrelevant. Yet, from another perspective, implying something works like magic focuses attention on the end result, denying an accounting of the means by which that end result was reached.”


D&S Researcher Alex Rosenblat was interviewed about Uber for Klint Finley’s article in Wired.

Tuesday’s agreement may not be the end of Uber’s problems with the FTC either. Hartzog says a recent paper by University of Washington law professor Ryan Calo and multidisciplinary researcher Alex Rosenblat of the research institute Data & Society points to other potential privacy concerns, such as monitoring how much battery power remains on a user’s device, because users with little juice might be willing to pay more for a ride.

‘When a company can design an environment from scratch, track consumer behavior in that environment, and change the conditions throughout that environment based on what the firm observes, the possibilities to manipulate are legion,’ Calo and Rosenblat write. ‘Companies can reach consumers at their most vulnerable, nudge them into overconsumption, and charge each consumer the most he or she may be willing to pay.’


D&S resident Rebecca Wexler describes the flaws of an increasingly automated criminal justice system.

The root of the problem is that automated criminal justice technologies are largely privately owned and sold for profit. The developers tend to view their technologies as trade secrets. As a result, they often refuse to disclose details about how their tools work, even to criminal defendants and their attorneys, even under a protective order, even in the controlled context of a criminal proceeding or parole hearing.


paper | 10.19.16

Discriminating Tastes: Customer Ratings as Vehicles for Bias

Alex Rosenblat, Karen Levy, Solon Barocas, Tim Hwang

D&S researchers Alex Rosenblat and Tim Hwang and D&S affiliates Solon Barocas and Karen Levy examine how bias may creep into evaluations of Uber drivers through consumer-sourced rating systems:

“Through the rating system, consumers can directly assert their preferences and their biases in ways that companies are prohibited from doing on their behalf. The fact that customers may be racist, for example, does not license a company to consciously or even implicitly consider race in its hiring decisions. The problem here is that Uber can cater to racists, for example, without ever having to consider race, and so never engage in behavior that amounts to disparate treatment. In effect, companies may be able to perpetuate bias without being liable for it.”


Nature | 10.13.16

There is a blind spot in AI research

Kate Crawford, Ryan Calo

D&S affiliate Kate Crawford, with Ryan Calo, wrote this piece on a blind spot in AI research: the lack of attention to the social risks and impacts of autonomous systems.

Artificial intelligence presents a cultural shift as much as a technical one. This is similar to technological inflection points of the past, such as the introduction of the printing press or the railways. Autonomous systems are changing workplaces, streets and schools. We need to ensure that those changes are beneficial, before they are built further into the infrastructure of every­day life.


D&S researchers Alex Rosenblat and Tim Hwang explore “the significant role of worker motivations and regional political environments on the social and economic outcomes of automation” in this new paper.

Preliminary observations of rideshare drivers and their changing working conditions reveal the significant role of worker motivations and regional political environments on the social and economic outcomes of automation. Technology’s capacity for social change is always combined with non-technological structures of power—legislation, economics, and cultural norms.


D&S affiliate Wilneida Negrón offers five tips for more inclusive AI research.

Although a step in the right direction, the Partnership on AI does highlight a certain conundrum — what exactly is it that we want from Silicon Valley’s tech giants? Do we want a seat at their table? Or are we asking for a deeper and more sustaining type of participation? Or perhaps, more disturbingly, is it too late for any truly inclusive and meaningful participation in the development of future AI technologies?


paper | 10.02.16

Exploring or Exploiting? Social and Ethical Implications of Autonomous Experimentation in AI

Sarah Bird, Solon Barocas, Kate Crawford, Fernando Diaz, Hanna Wallach

Sarah Bird, Fernando Diaz, and Hanna Wallach, with D&S affiliates Solon Barocas and Kate Crawford, wrote this analysis of the social and ethical implications of autonomous experimentation in AI.

In the field of computer science, large-scale experimentation on users is not new. However, driven by advances in artificial intelligence, novel autonomous systems for experimentation are emerging that raise complex, unanswered questions for the field. Some of these questions are computational, while others relate to the social and ethical implications of these systems. We see these normative questions as urgent because they pertain to critical infrastructure upon which large populations depend, such as transportation and healthcare. Although experimentation on widely used online platforms like Facebook has stoked controversy in recent years, the unique risks posed by autonomous experimentation have not received sufficient attention, even though such techniques are being trialled on a massive scale. In this paper, we identify several questions about the social and ethical implications of autonomous experimentation systems. These questions concern the design of such systems, their effects on users, and their resistance to some common mitigations.

 


book | 09.29.16

An AI Pattern Language

Madeleine Clare Elish, Tim Hwang

D&S researchers Madeleine Clare Elish and Tim Hwang discuss the social challenges of AI in a new collection of essays, An AI Pattern Language.

In A Pattern Language, the central problem is the built environment. While our goal here is not as grand as the city planner, we took inspiration from the values of equity and mutual responsibility, as well as the accessible form, found in A Pattern Language. Like those patterns, this document attempts to develop a common language of problems and potential solutions that appear in different contexts and at different scales of intervention.

 


paper | 09.16.16

The Wisdom of the Captured

Alex Rosenblat, Tim Hwang

D&S researchers Alex Rosenblat and Tim Hwang analyze how the widespread data capture that enables technologies to make intelligent decisions may negatively impact users.

More broadly, how might the power dynamics of user and platform interact with the marketing surrounding these technologies to produce outcomes which are perceived as deceptive or unfair? This provocation paper assembles a set of questions on the capacity for machine learning practices to create undisclosed violations of the expectations of users – expectations often created by the platform itself — when applied to public-facing network services. It draws on examples from consumer-facing services, namely GPS navigation services like Google Maps or Waze, and on the experiences of Uber drivers, in an employment context, to explore user assumptions about personalization in crowd-sourced, networked services.


Medium | 09.11.16

Artificial intelligence is hard to see

Kate Crawford, Meredith Whittaker

D&S affiliate Kate Crawford, with Meredith Whittaker, wrote a compelling piece, sparked by Facebook’s censorship of “The Terror of War” photograph, on the social impacts of artificial intelligence.

The core issue here isn’t that AI is worse than the existing human-led processes that serve to make predictions and assign rankings. Indeed, there’s much hope that AI can be used to provide more objective assessments than humans, reducing bias and leading to better outcomes. The key concern is that AI systems are being integrated into key social institutions, even though their accuracy, and their social and economic effects, have not been rigorously studied or validated.


D&S researcher Madeleine Clare Elish makes the case for ethnography and anthropology’s role in studying automation and intelligent systems.

Cultural perceptions of the role of humans in automated and robotic systems need to be updated in order to protect against new forms of consumer and worker harms. The symptoms of moral crumple zones (at the risk of mixing metaphors) are some of the phenomena that human factors researchers have been studying for years, such as deskilling, skill atrophy, and impossible cognitive workloads. One of the consequences is that the risks and rewards of technological development do not necessarily develop in the broader public interest. As with previous transitions in the history of automation, new technologies do not so much do away with the human but rather obscure the ways in which human labor and social relations are reconfigured.


Slate | 06.16.16

Letting autopilot off the hook

Madeleine Clare Elish

D&S researcher Madeleine Clare Elish discusses the complexities of error in automated systems. Elish argues that the human role in automated systems has become ‘the weak link, rather than the point of stability’.

We need to demand designers, manufacturers, and regulators pay attention to the reality of the human in the equation. At stake is not only how responsibility may be distributed in any robotic or autonomous system, but also how the value and potential of humans may be allowed to develop in the context of human-machine teams.


In this background primer, D&S Research Analyst Laura Reed and D&S Founder danah boyd situate the current debate around the role of technology in the public sphere within a historical context. They identify and tease out some of the underlying values, biases, and assumptions present in the current debate surrounding the relationship between media and democracy, and connect them to existing scholarship within media history that is working to understand the organizational, institutional, social, political, and economic factors affecting the flow of news and information. They also identify a set of key questions to keep in mind as the conversation around technology and the public sphere evolves.

Algorithms play an increasingly significant role in shaping the digital news and information landscape, and there is growing concern about the potential negative impact that algorithms might have on public discourse. Examples of algorithmic biases and increasingly curated news feeds call into question the degree to which individuals have equal access to the means of producing, disseminating, and accessing information online. At the same time, these debates about the relationship between media, democracy, and publics are not new, and linking those debates to these emerging conversations about algorithms can help clarify the underlying assumptions and expectations. What do we want algorithms to do in an era of personalization? What does a successful algorithm look like? What form does an ideal public sphere take in the digital age? In asking these and other questions, we seek to highlight what’s at stake in the conversation about algorithms and publics moving forward.



D&S Researcher Alex Rosenblat examines how Uber’s app design and deployment redistributes management functions to semiautomated and algorithmic systems, as well as to consumer ratings systems, creating ambiguity around who is in charge and what is expected of workers. Alex also raises questions about Uber’s neutral branding as an intermediary between supply (drivers) and demand (passengers) and considers the employment structures and hierarchies that emerge through its software platform:

Most conversations about the future of work and automation focus on issues of worker displacement. We’re only starting to think about the labor implications in the design of platforms that automate management and coordination of workers. Tools like the rating system, performance targets and policies, algorithmic surge pricing, and insistent messaging and behavioral nudges are part of the “choice architecture” of Uber’s system: it can steer drivers to work at particular places and at particular times while maintaining that its system merely reflects demand to drivers. These automated and algorithmic management tools complicate claims that drivers are independent workers whose employment opportunities are made possible through a neutral, intermediary software platform.

In many ways, automation can obscure the role of management, but as our research illustrates, algorithmic management cannot be conflated with worker autonomy. Uber’s model clearly raises new challenges for companies that aim to produce scalable, standardized services for consumers through the automation of worker-employer relationships.

 

 


In this Working Paper from We Robot 2016, D&S Researcher Madeleine Elish employs the concept of “moral crumple zones” within human-machine systems as a lens through which to think about the limitations of current frameworks for accountability in human-machine or robot systems.

Abstract:

A prevailing rhetoric in human-robot interaction is that automated systems will help humans do their jobs better. Robots will not replace humans, but rather work alongside and supplement human work. Even when most of a system will be automated, the concept of keeping a “human in the loop” assures that human judgment will always be able to trump automation. This rhetoric emphasizes fluid cooperation and shared control. In practice, the dynamics of shared control between human and robot are more complicated, especially with respect to issues of accountability.

As control has become distributed across multiple actors, our social and legal conceptions of responsibility remain generally about an individual. If there’s an accident, we intuitively — and our laws, in practice — want someone to take the blame. The result of this ambiguity is that humans may emerge as “moral crumple zones.” Just as the crumple zone in a car is designed to absorb the force of impact in a crash, the human in a robotic system may become simply a component — accidentally or intentionally — that is intended to bear the brunt of the moral and legal penalties when the overall system fails.

This paper employs the concept of “moral crumple zones” within human-machine systems as a lens through which to think about the limitations of current frameworks for accountability in human-machine or robot systems. The paper examines two historical cases of “moral crumple zones” in the fields of aviation and nuclear energy and articulates the dimensions of distributed control at stake while also mapping the degree to which this control of and responsibility for an action are proportionate. The argument suggests that an analysis of the dimensions of accountability in automated and robotic systems must contend with how and why accountability may be misapplied and how structural conditions enable this misunderstanding. How do non-human actors in a system effectively deflect accountability onto other human actors? And how might future models of robotic accountability require this deflection to be controlled? At stake is the potential ultimately to protect against new forms of consumer and worker harm.

This paper presents the concept of the “moral crumple zone” as both a challenge to and an opportunity for the design and regulation of human-robot systems. By articulating mismatches between control and responsibility, we argue for an updated framework of accountability in human-robot systems, one that can contend with the complicated dimensions of cooperation between human and robot.

 


paper | 03.10.16

Hiring by Algorithm

Ifeoma Ajunwa, Sorelle Friedler, Carlos E Scheidegger, Suresh Venkatasubramanian

D&S Fellow Sorelle Friedler and D&S Affiliate Ifeoma Ajunwa argue in this essay that well-settled legal doctrines prohibiting discrimination against job applicants on the basis of sex or race dictate an examination of how algorithms are employed in the hiring process, with the specific goals of: 1) predicting whether such algorithmic decision-making could generate decisions having a disparate impact on protected classes; and 2) repairing input data in such a way as to prevent disparate impact from algorithmic decision-making.

 

Abstract:

Major advances in machine learning have encouraged corporations to rely on Big Data and algorithmic decision making with the presumption that such decisions are efficient and impartial. In this Essay, we show that protected information that is encoded in seemingly facially neutral data could be predicted with high accuracy by algorithms and employed in the decision-making process, thus resulting in a disparate impact on protected classes. We then demonstrate how it is possible to repair the data so that any algorithm trained on that data would make non-discriminatory decisions. Since this data modification is done before decisions are applied to any individuals, this process can be applied without requiring the reversal of decisions. We make the legal argument that such data modifications should be mandated as an anti-discriminatory measure. And akin to Professor Ayres’ and Professor Gerarda’s Fair Employment Mark, such data repair that is preventative of disparate impact would be certifiable by teams of lawyers working in tandem with software engineers and data scientists. Finally, we anticipate the business necessity defense that such data modifications could degrade the accuracy of algorithmic decision-making. While we find evidence for this trade-off, we also found that on one data set it was possible to modify the data so that despite previous decisions having had a disparate impact under the four-fifths standard, any subsequent decision-making algorithm was necessarily non-discriminatory while retaining essentially the same accuracy. Such an algorithmic “repair” could be used to refute a business necessity defense by showing that algorithms trained on modified data can still make decisions consistent with their previous outcomes.
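For readers unfamiliar with the four-fifths standard referenced in the abstract, the sketch below illustrates how a disparate impact ratio is commonly computed against that threshold. It is a minimal, hypothetical example with made-up data and variable names, not code from the paper.

# Hypothetical sketch of a four-fifths (80%) disparate impact check.
# Data, names, and the 0.8 threshold are illustrative assumptions.

def selection_rate(outcomes):
    """Fraction of applicants in a group who received a positive decision (1)."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def four_fifths_check(protected_outcomes, reference_outcomes, threshold=0.8):
    """Return the disparate impact ratio and whether it meets the 4/5 rule.

    Each argument is a list of 0/1 hiring decisions for one group.
    """
    protected_rate = selection_rate(protected_outcomes)
    reference_rate = selection_rate(reference_outcomes)
    if reference_rate == 0:
        return 0.0, False
    ratio = protected_rate / reference_rate
    return ratio, ratio >= threshold

# Example: a 30% selection rate for the protected group versus 50% for the
# reference group yields a ratio of 0.6, which fails the four-fifths standard.
ratio, passes = four_fifths_check([1, 0, 0, 1, 0, 0, 1, 0, 0, 0],
                                  [1, 1, 0, 1, 0, 1, 0, 1, 0, 0])
print(f"disparate impact ratio = {ratio:.2f}, passes 4/5 rule: {passes}")

In the authors’ framing, a data repair would modify the training data so that any algorithm trained on it keeps such ratios above the threshold while preserving as much accuracy as possible.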


D&S Researcher Tim Hwang and Samuel Woolley consider the larger trend toward automated politics, its likely future sophistication, and its potential impacts on the public sphere in the era of social media.

Political bots are challenging in part because they are dual-use. Even though many of the bot deployments we see are designed to manipulate social media and suppress discourse, bots aren’t inherently corrosive to the public sphere. There are numerous examples of bots deployed by media organizations, artists, and cultural commentators oriented toward raising awareness and autonomously “radiating” relevant news to the public. For instance, @stopandfrisk tweets information on every instance of stop-and-frisk in New York City in order to highlight the embattled policy. On the other hand, @staywokebot sends messages related to the Black Lives Matter movement.

This is true of bots in general, even when they aren’t involved in politics. Intelligent systems can be used for all sorts of beneficial things—they can conserve energy and can even save lives—but they can also be used to waste resources and forfeit free speech. Ultimately, the real challenge doesn’t lie in some inherent quality of the technology but the incentives that encourage certain beneficial or harmful uses.

The upshot of this is that we should not simply block or allow all bots—the act of automation alone poses no threat to open discourse online. Instead, the challenge is to design a regime that encourages positive uses while effectively hindering negative uses.


In this commentary, D&S fellow Karen Levy considers the gendered dimensions of shifting cultures of work in response to the growing demands of the technologized/mediated workplace. She also explores the impact of new digital surveillance technologies on constructions of masculinity in the male-dominated US long-haul trucking industry.

New workplace technologies are often met with resistance from workers, particularly to the degree that they challenge traditional workplace norms and practices. These conflicts may be all the more acute when a work culture is deeply and historically gendered. In this Commentary, I draw from one such context—long-haul trucking—to consider the role a hypermasculine work culture plays in the reception of new digital monitoring technologies.

I base my analysis on ethnographic study of the United States long-haul trucking industry between 2011 and 2014. My research focused on the use of digital fleet management systems to achieve legal and organizational compliance. The research was multi-sited, taking me to eleven states in total, and to many sites of trucking-related work, including large and small firms, trucking conventions, regulatory meetings, inspection stations, and truck stops. Throughout the work, I spoke with and observed a wide variety of industry participants— truckers themselves, of course, but also fleet managers, technology vendors, trucking historians, insurance agents, lawyers, police officers, and many others.


Slate | 02.22.16

Algorithms Can Make Good Co-Workers

Madeleine Clare Elish

D&S Researcher Madeleine Clare Elish considers the possibility of a full-on replacement of humans by robots. She argues that this scenario is nowhere near as close as we have been led to believe. Though algorithms can do an astounding range of things that were once viewed as exclusively human work, they don’t work all by themselves.

This is a crucial but often overlooked point in the debate around algorithms and the future of work: Most human jobs will not be replaced but rather reconfigured in the near future. We absolutely need to worry about the long-term implications on the demand for human labor and how this will affect the economy. But if we only focus on the question of whether and when humans will be replaced, we miss the impact algorithms are already having on work and the opportunities to make choices, as designers and consumers, about how algorithms can disrupt or enforce existing power dynamics in the future.


Quartz | 07.26.15

Override

Gideon Lichfield

Data & Society’s Intelligence and Autonomy initiative commissioned authors to envision future scenarios for intelligent systems in four domains: medicine, labor, urban design, and warfare.

The future scenario around labor, this story by Gideon Lichfield titled Override, was published on Quartz in July 2015.


“In a self-driving car, the control of the vehicle is shared between the driver and the car’s software. How the software behaves is in turn controlled — designed — by the software engineers. It’s no longer true to say that the driver is in full control… Nor does it feel right to say that the software designers are entirely in control.
“Yet as control becomes distributed across multiple actors, our social and legal conceptions of responsibility are still generally about an individual. If there’s a crash, we intuitively — and our laws, in practice — want someone to take the blame.
“The result of this ambiguity is that humans may emerge as ‘liability sponges’ or ‘moral crumple zones.'”

At Data & Society’s Intelligence and Autonomy forum in March 2015, “moral crumple zone” emerged as a useful shared term for the way the “human in the loop” is saddled with liability in the failure of an automated system.

In this essay in Quartz, Madeleine Clare Elish and Tim Hwang explore the problematic named by “moral crumple zone,” with reference to cruise control, self-driving cars, and autopilot.


Fast Company | 06.15.15

Do Not Fear The Robot Apocalypse

Baratunde Thurston

Excerpt: “With all due respect to the boldface AI worriers, do we need to invent a boogeyman from the future when we’ve got the present to worry about? Is tomorrow’s machine enslavement so much more terrifying than today’s vast amounts of child labor, human trafficking, and incarceration? Our current human law enforcement could certainly use some superhuman intelligence to counter the systemic and implicit bias that leads to such disparate levels of arrests, violence, and abuse.”


D&S fellow Tim Hwang participated in a session on “Memory Holes and Security Blankets,” a discussion focusing on the legal and policy consequences of systems that process audio, at Listening Machines Summit.

Tim’s talk discussed the taxonomy of “sleights of machine” as a framework for thinking about regulatory approaches to these technologies.


Data & Society’s Intelligence and Autonomy initiative commissioned authors to envision future scenarios for intelligent systems in four domains: medicine, labor, urban design, and warfare.

The future scenario around medicine, a story by Robin Sloan titled The Counselor, was published on VICE’s Motherboard in May 2015 along with this commentary by Tim Hwang and Madeleine Clare Elish.


Motherboard | 05.25.15

The Counselor

Robin Sloan

Data & Society’s Intelligence and Autonomy initiative commissioned authors to envision future scenarios for intelligent systems in four domains: medicine, labor, urban design, and warfare.

The future scenario around medicine, this story by Robin Sloan titled The Counselor, was published on VICE’s Motherboard in May 2015 along with a commentary by Tim Hwang and Madeleine Clare Elish.


Civicist | 05.14.15

Bring on the Bots

Samuel Woolley, Tim Hwang

In this piece for Civic Hall’s Civicist, Samuel Woolley and D&S fellow Tim Hwang argue that “[t]he failure of the ‘good bot’ is a failure of design, not a failure of automation” and urge us not to dismiss the potential benefits of bots.


One week D&S affiliate Elana Zeide is described by the new app Crystal as “a quick learner with strong analytical, creative, and social skills, but may seem scatter-brained, forgetful, and/or sarcastic” and the next week as “pragmatic, independent, and need logical reasons for everything—but [am] able to take a calculated risk when necessary.”

While it isn’t clear to Zeide why the app changed its opinion, she shows why such assessments should be taken with caution and points to the larger implications that these types of tools can have.


“As policy concerns around intelligent and autonomous systems come to focus increasingly on transparency and usability, the time is ripe for an inquiry into the theater of autonomous systems. When do (and when should) law and policy explicitly regulate the optics of autonomous systems (for instance, requiring electric vehicle engines to ‘rev’ audibly for safety reasons) as opposed to their actual capabilities? What are the benefits and dangers of doing so? What economic and social pressures compel a focus on system theater, and what are the ethical and policy implications of such a focus?”

D&S fellows Karen Levy and Tim Hwang presented a paper on The Presentation of the Machine in Everyday Life (discussant: Evan Selinger) at WeRobot 2015.


Abstract: What will happen to current regimes of liability when driverless cars become commercially available? What happens when there is no human actor—only a computational agent—responsible for an accident? This white paper addresses these questions by examining the historical emergence and response to autopilot and cruise control. Through an examination of technical, social and legal histories, we observe a counter-intuitive focus on human responsibility even while human action is increasingly replaced by automation. We argue that a potential legal crisis with respect to driverless cars and other autonomous vehicles is unlikely. Despite this, we propose that the debate around liability and autonomous systems be reframed to more precisely reflect the agentive role of designers and engineers and the new and unique kinds of human action attendant to autonomous systems. The advent of commercially available autonomous vehicles, like the driverless car, presents an opportunity to reconfigure regimes of liability that reflect realities of informational asymmetry between designers and consumers. Our paper concludes by offering a set of policy principles to guide future legislation.

“Praise the Machine! Punish the Human! The Contradictory History of Accountability in Automated Aviation” is the first paper in the Intelligence and Autonomy project’s series of Comparative Studies in Intelligent Systems. Intelligence and Autonomy is supported by the John D. and Catherine T. MacArthur Foundation.

(Image from: International Telephone and Telegraph Corporation. May 1957. Advertisement. Broadcasting · Telecasting, 139.)


“Seeta Gangadharan is a Senior Research Fellow at the Open Technology Institute in Washington DC [and a D&S fellow]. She discusses the automated systems, known as algorithms, that are replacing human discretion more and more often. Algorithms are a simple set of mathematical rules embedded in the software to complete a task. They allow Google to rank pages according to their relevance and popularity when people conduct an internet search, and allow internet sites like Amazon and Netflix to monitor our purchases and suggest related items. But open technology advocates say there is not enough oversight of these algorithms, which can perpetuate poverty and inequality.”

Kathryn Ryan, The computer algorithms that run our lives, Nine To Noon (Radio New Zealand), 23 February 2015


MIT Technology Review | 11.18.14

Smart Cities Will Take Many Forms

Anthony Townsend

“What does a city taken over by computers—or perhaps smartphones—look like?
“[D&S fellow Anthony Townsend:] ‘A city that’s taken over by computers designed by a big technology company is going to look like a machine. It’s going to be highly automated, highly centralized, and very efficient. It may not be a lot of fun, it may not be terribly respectful of our desire for privacy, it may not be very resilient. On the other hand, we could design cities that have a very decentralized, very redundant kind of infrastructure where the services that we create using sensors and displays and all these digital technologies are trying to achieve objectives that are more in line with increasing social interaction, increasing sustainable behaviors, reinforcing the development of culture, creativity, and wellness. So there are very different possible outcomes. It’s really up to the choices we make.'”

Nate Berg, Smart Cities Will Take Many Forms, MIT Technology Review, November 18, 2014


Self-driving cars are no longer in our distant future; they’re here, and they’re becoming more independent. However, D&S fellow Anthony Townsend and NYU colleague Greg Lindsay argue that this approach is “looking at the wrong problem.”


primer | 10.08.14

Future of Labor: Understanding Intelligent Systems

Alex Rosenblat, Tamara Kneese, danah boyd

Science fiction has long imagined a workforce reshaped by robots, but the increasingly common instantiation of intelligent systems in business is much more mundane. Beyond the utopian and dystopian hype of increased efficiencies and job displacement, how do we understand what disruptions intelligent systems will have on the workforce?

This document was produced as a part of the Future of Work Project at Data & Society Research Institute. This effort is supported by the Open Society Foundations’ U.S. Programs Future of Work inquiry, which is bringing together a cross-disciplinary and diverse group of thinkers to address some of the biggest questions about how work is transforming and what working will look like 20-30 years from now. The inquiry is exploring how the transformation of work, jobs and income will affect the most vulnerable communities, and what can be done to alter the course of events for the better.

