Why AI Safety Requires a Sociotechnical Approach: Our Top Ten Reads

Our top ten reads for those interested in understanding why advancing AI safety requires a sociotechnical approach to AI governance.

July 24, 2024

From the attention to AI safety in legislative proposals to the establishment of national AI Safety Institutes in the US and abroad, current policy efforts have a decided focus on “safe AI.” Yet the call for AI safety has also fueled a rhetoric that frames achieving it as a purely technical process. This framing ignores the importance of understanding AI as part of a sociotechnical system, one shaped by the social, cultural, and political realities that ultimately influence the efficacy of AI systems and their impact on the public.

Here, we offer a reading list for those interested in understanding why advancing AI safety requires a sociotechnical approach to AI governance, an approach highlighted in recent prominent policy, research, and academic interventions, including NIST’s AI Risk Management Framework. As these readings illustrate, sociotechnical approaches have long been integrated into the field of safety engineering.

1. “AI Safety on Whose Terms?”

Science
By Seth Lazar and Alondra Nelson

Despite major investments and initiatives from both tech companies and governments, the prevailing technical agenda for AI safety is insufficient to address critical questions about the accountability needed from companies developing AI systems and the governance of potentially transformative technology. Seth Lazar and Alondra Nelson argue for a sociotechnical approach to AI safety, emphasizing the need to consider broader societal impacts and values among a diversity of stakeholders to mitigate the current and future dangers of AI.

2. Finding and Recommendations: AI Safety

The National Artificial Intelligence Advisory Committee (NAIAC)

Informed by a convening of two panels of experts on the methods needed to achieve AI safety, the NAIAC highlights that AI safety encompasses and requires both technical and sociotechnical approaches. “A narrow view of AI safety,” the NAIAC finds, “will produce an incomplete view of safe AI systems.” Because AI evaluation practices are still early in their development, further “empirical research is needed to advance the science of AI safety.” NAIAC recommends that the “US AI Safety Institute . . . approach AI safety as an expansive field, addressing (at least) technical model engineering and broader societal concerns” and that the “federal government . . . help to develop the empirical research base needed to advance the science of AI safety, from technical auditing for vulnerabilities to controlled human testing environments.”

3. “Concrete Problems in AI Safety, Revisited”

AI Now Institute
By Deb Raji and Roel Dobbe

Deb Raji and Roel Dobbe examine the challenges of preventing failures in AI systems due to unanticipated behavior. Through real-world case analyses, they argue for an expanded sociotechnical framing to better understand how AI systems and safety mechanisms fail in practice. The study highlights the need to consider broader stakeholder impacts, validate safety problems through inductive reasoning, and address errors in engineering practice to enhance AI safety effectively.

4. “Fairness and Abstraction in Sociotechnical Systems”

SSRN
By Andrew D. Selbst, danah boyd, Sorelle Friedler, Suresh Venkatasubramanian, and Janet Vertesi

The authors discuss how computer science concepts like abstraction and modular design, which are often used in definitions of fairness, can lead to ineffective and potentially harmful interventions when applied in the “societal context that surrounds decision-making systems.” They identify five key “traps” that even well-intentioned work can fall into: failing to properly consider the broader social context, oversimplifying complex social concepts like fairness, not accounting for how the technology may change existing systems in unintended ways, relying too heavily on mathematical formalisms, and assuming that a technological solution is always appropriate or possible. To address these pitfalls, the authors suggest that designers focus on process rather than solutions and include social actors within abstraction boundaries.

5. “Building the Epistemic Community of AI Safety”

SSRN
By Shazeda Ahmed, Klaudia Jazwinska, Archana Ahlawat, Amy Winecoff, and Mona Wang

The authors outline the ideas and communities creating a specific epistemic culture, one connecting effective altruism, existential risk, longtermism, and AI safety, that has become dominant in the AI regulation space. This coordinated epistemic community uses AI safety research, career advising, online forums, AI forecasting, prize competitions, and media influence to build knowledge and community across industry, academia, and policy, creating a monopoly on the global discussion about AI.

6. “Sociotechnical Safety Evaluation of Generative AI Systems”

arXiv
By Laura Weidinger, Maribeth Rauh, Nahema Marchal, Arianna Manzini, Lisa Anne Hendricks, Juan Mateos-Garcia, Stevie Bergman, Jackie Kay, Conor Griffin, Ben Bariach, Iason Gabriel, Verena Rieser, and William Isaac

Surveying existing safety evaluations of generative AI and outlining the gaps in current evaluation models, the authors propose a comprehensive sociotechnical risk evaluation framework for generative AI. The framework pairs capability evaluations with context evaluations, taking into account human interaction and systemic impacts. The authors also suggest concrete ways to close existing evaluation gaps, including methods for operationalizing risk, guidance for choosing evaluation methods, and approaches to the multimodal evaluation gap.

7. “From Plane Crashes to Algorithmic Harm: Applicability of Safety Engineering Frameworks for Responsible ML”

2023 CHI Conference on Human Factors in Computing Systems
By Shalaleh Rismani, Renee Shelby, Andrew Smart, Edgar Jatho, Joshua Kroll, AJung Moon, and Negar Rostamzadeh

This paper explores the multiple perspectives required to design and assess machine learning processes for safety and harm, illustrating how safety engineering frameworks can be leveraged to investigate the social and ethical risks that arise from machine learning systems’ design and use. The authors also provide an overview of how practitioners define, evaluate, and mitigate social and ethical risks, and of how certain safety engineering frameworks can help augment existing practices.

8. “Sociotechnical Harms of Algorithmic Systems: Scoping a Taxonomy for Harm Reduction”

AAAI/ACM Conference on AI, Ethics, and Society
By Renee Shelby, Shalaleh Rismani, Kathryn Henne, AJung Moon, Negar Rostamzadeh, Paul Nicholas, N’Mah Yilla, Jess Gallegos, Andrew Smart, Emilio Garcia, and Gurleen Virk

The authors highlight the importance of understanding the landscape of potential harms from algorithmic systems so that practitioners can better anticipate the consequences of the systems they produce, and find that computing researchers and practitioners lack a high-level overview of such harms. They present a taxonomy of sociotechnical harms to support a “more systematic surfacing of potential harms in algorithmic systems,” and identify multiple themes related to such harms.

9. “How AI Can Be Regulated Like Nuclear Energy”

TIME
By Heidy Khlaaf

Heidy Khlaaf analyzes the analogy drawn between AI existential risk and nuclear energy, criticizing the contradictory claims that future AI presents such a significant risk that regulation is required, yet that existing harms do not warrant regulation at all. Khlaaf argues that “our inability to prevent today’s AI harms, such as algorithmic discrimination and reducing the cost of disinformation or cybersecurity attacks, only entails that we are ill-prepared to trace and grasp any cascading implications and control of AI risks.”

10. A Sociotechnical Approach to AI Policy and AI Governance Needs Sociotechnical Expertise

Data & Society
By Brian Chen and Jacob Metcalf, and Serena Oduro and Tamara Kneese

In A Sociotechnical Approach to AI Policy, Data & Society’s Brian Chen and Jacob Metcalf draw on established literature and historical and present-day examples to explain what a sociotechnical perspective is and why it matters in policy. A sociotechnical approach recognizes that a technology’s real-world safety and performance are always a product of technical design and broader societal forces, including organizational bureaucracy, human labor, social conventions, and power. As this brief illustrates, policymakers must be just as expansive in how they observe and understand AI, and in the tools they use to regulate it.

In AI Governance Needs Sociotechnical Expertise, D&S’s Serena Oduro and Tamara Kneese argue that humanities and social science expertise is critical to any effort to address the sociotechnical nature of AI systems. Sociotechnical research and approaches have proven crucial to AI development and accountability; the key will be implementing AI governance practices that employ the expertise required to reap these benefits. This policy brief outlines some of the ways that doing so can help us assess the performance and mitigate the harms of AI systems. It concludes with a set of recommendations for incorporating humanities and social science methods and expertise into government efforts, including in hiring and procurement.