Read our related explainer, "AI Governance Needs Sociotechnical Expertise," on why the humanities and social sciences are critical to government efforts.
Despite its increasing adoption in AI governance and industry circles — or perhaps because of it — the term “sociotechnical” may be among the most misunderstood in AI policy. Drawing on established literature and historical and present-day examples, this brief explains what a sociotechnical perspective is and why it matters in policy.
Generally, a sociotechnical perspective means viewing society and technology together as one coherent system. In other words, it is not possible to understand the “social” without the “technical,” nor the “technical” without the “social.” Explaining the outcomes of any technology requires focusing on the messier “middle ground” between these two realms.
A sociotechnical approach recognizes that a technology’s real-world safety and performance are always a product of technical design and broader societal forces, including organizational bureaucracy, human labor, social conventions, and power. As this brief illustrates, the ways policymakers observe and understand AI, and the tools they use to regulate it, must be just as expansive.
Our corresponding one-pager, “Answering Three Sociotechnical Questions,” offers a starting point for policymakers to incorporate a sociotechnical approach into AI governance.