On March 24–25, 2015, Data & Society's Intelligence and Autonomy initiative held its first annual futures forum, convening scholars, technologists, and stakeholders working across arenas relevant to the emergence of intelligent systems. The goal of the forum was to identify common policy challenges raised by the widespread implementation of intelligent systems and to surface linkages between the various contexts in which these technologies are emerging.
To initiate and provoke discussion, I&A commissioned authors to envision future scenarios for intelligent systems in four domains: medicine, labor, urban design, and warfare.
At the forum, “moral crumple zone” emerged as a useful shared term for the way the “human in the loop” is saddled with liability when an automated system fails. In a subsequent essay in Quartz, Madeleine Clare Elish and Tim Hwang explore the problem the term names, with reference to cruise control, self-driving cars, and autopilot.