How to Think Like a Sociotechnical Researcher

Start by recognizing that every technical system represents a set of choices.


October 9, 2024

At Data & Society, we do sociotechnical research. In its most basic terms, sociotechnical research argues that problems of technology are problems of society. 

As a senior researcher with Data & Society’s AI on the Ground team, where I research the impact of algorithmic systems, I approach sociotechnical research with four core assumptions: 

  • First, there is no such thing as a purely technical system. 
  • Second, every technical system is designed with a particular perspective and a vision to transform society — but this transformation does not happen equally for everyone. So, paying attention to the differences in people’s experiences with technical systems — and where they lead — is crucial for sociotechnical research. 
  • Third, every technical system represents a set of choices — choices that we make when we build it, and choices we make when we use it. Sociotechnical research is focused on the nature of these choices, why we make them, and at whose expense.
  • And finally, fourth: The relationship between technology and society is a two-way street; they mutually shape each other.

To see how these assumptions work together, let’s consider an example. Let’s look at the basic building blocks of large language models: transformers. Technically, transformers encode vast amounts of data into layers of internal representations, and then decode these representations to produce a meaningful response to a prompt. 

The novelty and success of transformers lies in their ability to direct attention. During the encoding process, attention helps figure out the relationships between these internal representations. During the decoding process, attention helps identify what is important for producing a meaningful response. The title of a famous paper that highlights this ability is simply: “Attention Is All You Need.”
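
To make that mechanism concrete, here is a minimal sketch of scaled dot-product attention, the operation at the heart of a transformer layer. It is an illustration in Python with NumPy, not code from any real model; the three toy token vectors and their values are hypothetical.

```python
import numpy as np

def softmax(scores, axis=-1):
    # Turn raw scores into weights that are positive and sum to 1.
    scores = scores - scores.max(axis=axis, keepdims=True)  # for numerical stability
    exp = np.exp(scores)
    return exp / exp.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(queries, keys, values):
    # Each query scores every key; the scores become weights over the values.
    d_k = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)        # what each token "pays attention to"
    return weights @ values, weights          # a weighted blend of the value vectors

# Three hypothetical token representations with four features each.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(3, 4))
# Self-attention: the same representations serve as queries, keys, and values.
output, weights = scaled_dot_product_attention(tokens, tokens, tokens)
print(weights.round(2))  # each row sums to 1 across the three tokens
```

Each row of that weight matrix is a distribution over the tokens: a record of what, in this tiny example, each representation attends to as the next layer of representations is built.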

But in the process of identifying what it should be paying attention to, a transformer is essentially trying to answer: what would we as people pay attention to? Now that is a social and cultural question. What we pay attention to is a matter of our perspective. We can see this in the most ordinary of ways, including the choice of the title of the paper “Attention Is All You Need,” which is inspired by the Beatles’ song “All You Need Is Love.” And as they put it: “There is nothing you can see that isn’t shown.”

On one hand, we are quite literally training large language models to answer questions to our liking through reinforcement learning from human feedback. This process is unequal in its very design: it prioritizes some perspectives at the expense of others. On the other hand, we are also paying close attention to what transformers pay attention to in the hopes of learning new things from data. In understanding attention, we can see all four core assumptions of sociotechnical research working together.
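
To ground that first observation, here is a minimal sketch of the reward-modeling step that reinforcement learning from human feedback commonly relies on: a pairwise loss that pushes the response a human labeler preferred above the one they rejected. The scores and preference pairs below are hypothetical, chosen only for illustration.

```python
import numpy as np

def preference_loss(reward_chosen, reward_rejected):
    # Bradley-Terry style objective: the loss shrinks as the reward model
    # scores the labeler-preferred response higher than the rejected one.
    margin = reward_chosen - reward_rejected
    return float(np.mean(np.log1p(np.exp(-margin))))  # equals -log(sigmoid(margin))

# Hypothetical scalar scores a reward model currently assigns to pairs of candidate answers.
reward_chosen = np.array([1.2, 0.3])    # the responses labelers preferred
reward_rejected = np.array([0.9, 0.8])  # the responses labelers rejected
print(preference_loss(reward_chosen, reward_rejected))
```

Training drives this loss down, pulling the model toward whatever its labelers happened to prefer; that is where some perspectives get prioritized at the expense of others.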

But there are also challenges in doing sociotechnical research to study the impact of large language models. AI enthusiasts imagine that large language models will disrupt every profession, from medicine and law to coding and journalism. This stems from their belief that a substantial amount of economically valuable work and professional expertise in the knowledge economy is simply a matter of processing language and managing conversations with clients. From this perspective, a language model should be able to do most of this work if it can just pay attention to what practitioners pay attention to. 

For example: talking to a patient about their symptoms to come up with a diagnosis. From a practitioner’s perspective, what is most crucial is that the model’s outputs are consistent with their standards of professional care. But this is not a trivial problem of AI adoption. It underscores a core tension between practitioners’ expertise and a model’s capabilities. How much time should practitioners put into training a model to do the work that they normally do? At what scale of AI adoption does all of this time become an appropriate return on their investment of effort? Is it reasonable for us to imagine that in the future people will speak with a chatbot first, before talking to a human practitioner? Each profession will have to work through these questions. They will have to manage the scale at which models can align with their practice and who gets left behind in this transformation. 

Thinking like a sociotechnical researcher requires that we grapple with these choices. It requires us to cut through the arguments about the inevitability of AI. When we start paying attention to how we make choices about AI, we are all thinking like sociotechnical researchers.  

We can ask: What does a given application of AI mean for us as individuals? What does it mean for the communities we identify with? What does it mean for the work we do; for our professions; for our culture; for our places; for our countries; for the world we inhabit; and for the other species that we share our planet with? 

These questions invite us to imagine and live in a society where AI is not a given, but a series of active choices. After all, we are products of the choices we make. 

Adapted from a lightning talk Singh delivered at Data & Society’s tenth anniversary event on September 26, 2024.