Trust, it seems, is in decline: In the United States, trust in religious institutions, Congress, banks, and the media — to name just a few examples — has been decreasing at least since the General Social Survey began measuring Americans’ confidence in these institutions in the 1970s. Across the world, research shows that levels of trust in government processes vary widely according to context and political formation. And trust in the technology industry, oversold to us thanks to the prestige of quantification, has experienced a precipitous downturn over the last several years, even when compared with government, nonprofit, and other commercial institutions.
Perhaps because of this, there is a growing focus on building trust in media, in government, and in AI systems. When it comes to data-centric technologies, this focus raises important questions: Can trust be built into systems that users have already determined to be untrustworthy? Should we think of trust as something that declines or improves, as something to be engineered into AI and other data-centric systems, or as something produced through sets of relations and in particular locations? Where else, besides large institutions and their technologies, is trust located? How do other frames of trust produce community-centered politics, such as a politics of refusal or of data sovereignty? What can community-based expertise tell us about how trust is built, negotiated, and transformed within and to the side of large-scale systems? Is there a disconnect between proposed solutions to a broad lack of trust and the ways social theorists, community members, and cultural critics have thought about trust?
Trust is deeply relational (Scheman 2020; Knudsen et al. 2021; Baier 1986) and has been understood in terms of the vulnerabilities inherent in relationships (Mayer et al. 1995). Yet discussions about trust in AI systems often reveal a lack of understanding of the communities whose lives these systems touch — their particular vulnerabilities, and the power imbalances that further entrench them. Some populations are expected simply to put their trust in large AI systems. Yet those systems need only prove themselves useful to the institutions deploying them, not trustworthy to the people enmeshed in their decisions (Angwin et al. 2016; O’Neill 2018; Ostherr et al. 2017). At the same time, researchers often stop at asking whether we can trust algorithms, rather than extending the question of trust to the institutions that feed data into or deploy those algorithms.
This workshop will examine alternative formulations of trust, data, and algorithmic systems by widening the frame on the contexts, approaches, and communities implicated in them.