Academic Workshop

The Social Life of Algorithmic Harms

Call for Applications

Workshop Dates:
March 10 and March 11, 2022
The application portal is now closed.

The AI on the Ground Initiative invites applications for Data & Society’s academic workshop, The Social Life of Algorithmic Harms. Our fundamental workshop question is: 

While the ways algorithmic systems interact with and inflect social life are theoretically boundless in their local contexts and trajectories, how can the harms they produce be practically organized so that they can be engaged within regulatory, political, and judicial contexts and within the development lifecycle?

Together, we’ll help widen research frames and identify new categories of algorithmic harms, building the field to reflect that these are social harms, not bounded by the parameters of the technical system, and that they travel through social systems (e.g., judicial decisions, policy recommendations, interpersonal lived experience).

This program is currently being planned with opportunities to participate in person in New York City, online, or in a hybrid of the two on Thursday, March 10 and Friday, March 11, 2022. Data & Society’s AI on the Ground Postdoctoral Scholar Emanuel Moss and Director Jacob Metcalf invite applications from Authors to workshop their academic papers, chapters, data mappings, or other outputs, and from Participants to prepare interdisciplinary feedback on the selected works-in-progress.

Data & Society academic workshops enable deep dives with a broad community of interdisciplinary researchers into topics at the core of Data & Society’s concerns. They’re designed to maximize scholarly thinking about the evolving and socially important issues surrounding data-driven technologies, and to build connections across fields of interest.

Participation is limited; apply here by December 3, 2021 at 11:59 p.m. ET.

About the Workshop Theme

This workshop is focused on “the social life of algorithmic harms.” It takes as its premise the idea that we know only a small proportion of the ways algorithmic systems can negatively impact the world, individuals, communities, and ecosystems. In the AI ethics and fairness literatures, a narrow range of these harms has received the majority of attention, in part because these harms are constructed as amenable to measurement and addressable through technological solutions. That is, much of what we currently know about algorithmic harms is expressed in the terms of the systems, rather than in the terms of those who are harmed. Constructing harms in this way, as quantifiable, once again centers the actors (engineers) who may perceive and rectify that harm, while ignoring the lived experience of those impacted by such systems. Additionally, these harms are often described from a narrow socio-political perspective that misses the global diversity of modalities in which algorithmic harms manifest. Much work remains if algorithmic systems are to be assessed in ways that prevent the development of the most harmful systems, minimize or mitigate the harms of systems that are being developed and integrated into society, center those closest to harm, and promote the public interest throughout the lifecycle of such systems.

This workshop solicits research that contributes to our understanding of algorithmic harms as social. We are interested in work that centers lived experience, brings new disciplinary perspectives to bear on the identification and study of algorithmic harms, develops novel methodological approaches to describing and/or measuring the scope of such harms, and expands our understanding of who and what may be harmed by the operation of algorithmic systems. This workshop is especially focused on work that suggests ways of addressing algorithmic harms that do not fit into categories already well elaborated within the AI ethics and algorithmic fairness literatures. For relevant examples of work that has inspired this workshop, or that has begun to grapple with the social life of algorithmic harm, see the selected bibliography below.

We envision the following three main themes (which may shift after reviewing applications):

  1. Mapping (new harms + implications)
  2. Methods (measurement/tracking + reporting)
  3. Implementations (regulation/law/policy)

Relevant project topics for this workshop might include:

  • Empirical studies (quantitative and/or qualitative) of harms under-represented within the algorithmic accountability literature.
  • Theoretical studies framing algorithmic harms through ethnographic, political scientific, sociological, or other disciplinary perspectives.
  • Ethnographic studies of the lived experience of encountering algorithmic harms.
  • Techniques for anticipating prospective or novel harms.
  • Case studies that illustrate or otherwise shed new light on ways of understanding and conceptualizing algorithmic harm as an analytic category.
  • Case studies that illustrate socio-technical approaches to ameliorating or otherwise mitigating algorithmic harms within the development lifecycle.
  • Methodological approaches to documenting and evaluating novel algorithmic harms, both quantitative and qualitative.
  • Work that comprehensively reviews and/or taxonomizes the breadth of algorithmic harms. 
  • Policy or legal studies that describe how taxonomies of algorithmic harms may be used for legislative and regulatory purposes.
  • Participatory approaches to understanding algorithmic harms, including methodological interventions, case studies, and synthetic or theoretical work.

We encourage all attendees to approach the Data & Society workshop series as an opportunity to engage across specialties, and to strengthen both relationships and research through participation. While we recognize the value of the workshop to individual projects, we also see it as a valuable field-building exercise for all involved.

Who We Are Looking For

When we select Authors, we will ask:

  • Does the project contribute original or under-recognized narratives about algorithmic impact? 
  • Does the project attend to how difference (race, gender, age, class, caste, location, etc.) plays a role in harms? Does it make global connections? Does it incorporate elements of citational justice, or otherwise draw upon a diverse range of relevant prior work?
  • Does the project incorporate how the design and operation of algorithmic systems has differentiated consequences?
  • Can the work-in-progress and/or the career of the applicant benefit from feedback and connections made during the workshop?

For both Authors and Participants: 

  • Does the participation roster represent diverse perspectives, disciplines, and areas of expertise drawn from across academia, industry, government, advocacy, and civil society?
  • Is the applicant well-poised to participate in a conversation that posits a new theoretical vision of and research agenda for algorithmic harms?
  • Do proposed Authors and Participants bring a broad range of disciplinary interests, academic development, and social positionings? Do they demonstrate investment in meaningful feedback, opportunities for collaborations, and citational recognition? 

Format

We are currently planning for the event to take place on Thursday, March 10, 2022 from 3:30 to 8 p.m. ET and Friday, March 11, 2022 from 9 a.m. to 3 p.m. ET (exact timing to be confirmed), both onsite in NYC and remotely, if deemed safe to do so. All eligible participants will receive a $150 stipend, and limited additional travel support is available upon request. Unlike a conference, this workshop focuses on reading, imagining, and offering interdisciplinary responses to in-progress projects, and on building collaborative networks for exploring interwoven themes.

Authors: this is a fantastic venue for workshopping a project. If you have an appropriate in-progress paper or other type of work (chapter, data mapping, etc.), you are strongly encouraged to submit a project summary for consideration. Drafts of journal articles, conference papers, law review papers, and book chapters are all welcome. Projects are expected to be roughly 75% complete drafts with room for improvement (10K words max); the goal of this event is not to present finished research but to truly workshop works-in-progress. Rather than presenting complete projects, Authors will listen to, and engage with, critical discussion from the assembled group about the idea, with the explicit intent of making the project stronger and more interdisciplinary. Note that Authors are expected to read and provide feedback on up to 2 other projects during other portions of the day, in addition to receiving comments on their own work.

Participants: If you do not wish to submit a project-in-progress but are interested in the topic, we welcome your application as a Participant. All workshop Participants will be asked to review up to 3 projects in advance of the event and to prepare comments for intensive discussion. Some Participants will be invited to serve as Discussants, who will lead the conversation and engage the group in feedback.

The workshop format will include a mix of talks, provocations, deep-dive discussions, networking opportunities, and up to 3 slots focused on workshopping works-in-progress. Each feedback session will be 75 minutes long. Multiple sessions will run in parallel so there will be a total of 9-15 feedback sessions, but each participant will only be responsible for attending 3. Within each group, a Discussant will open with an introduction to the featured project before inviting Participants to share responses and suggestions. 

All attendees will also have the opportunity for informal networking and thematic conversations throughout the day.

How to Apply

If you are interested in attending this workshop, you may either 1) propose research to be workshopped (Author); or 2) describe how your expertise and experience make you a relevant participant (Participant). If you select “Either,” please submit your project summary only: you will be considered for participation even if your project isn’t selected for the workshop.

Please note: Authors of co-written projects may apply using the same project summary, but each must apply separately. Because of capacity limitations, we ask that no more than three co-authors apply for participation around the same project.

By December 3, 2021 at 11:59 p.m. ET, please submit the following information here:

  • First and last name, affiliation, role, link to bio or work, discipline (key words/subjects), career stage, sector, location, contact email, and pronouns [optional].
  • Type of application.
  • If applying as an Author or Either, in 500 words or less, tell us about your project. How does it surface something new about algorithmic harms? Give us a sense of what stage of development your project is at, any planned methodologies and formats, and how this workshop can be helpful. We expect many academic research projects, but also welcome work in alternative formats that provides new insight into these impacts. (OR)
  • If applying as a Participant, in 250 words or less, tell us why you want to be a part of this conversation. What would make you a great contributor to shaping in-progress projects about algorithmic harms? We welcome researchers, practitioners, activists, policymakers, graduate scholars, and others who are able to offer new perspectives on this scholarship.
  • Confirm you can meet event deadlines if selected.
  • Link to 1 project or writing (yours or others) that everyone interested in this domain should know about. [Optional]

Key Dates

All deadlines are at 11:59 p.m. ET.

  • Application Deadline: Fri, December 3, 2021
  • Selection Notifications + Format Update: Wed, December 8, 2021
  • Revised Summary + RSVP Deadline: Wed, January 12, 2022
  • Draft Paper Deadline: Mon, January 31, 2022
  • Group Assignments: Fri, February 18, 2022
  • Workshop: Thu-Fri, March 10-11, 2022

Questions? Contact [email protected]

Bibliography

To illustrate the range of perspectives we hope to bring together in this workshop, we offer this brief bibliography of work that has informed our theme:

Acemoglu, Daron. 2021. “Harms of AI.” Working Paper 29247. Cambridge, MA: National Bureau of Economic Research.

Benjamin, Ruha. 2019. Race After Technology: Abolitionist Tools for the New Jim Code. Medford, MA: Polity Press.

Costanza-Chock, Sasha. 2020. Design Justice: Community-Led Practices to Build the Worlds We Need. Cambridge, MA: MIT Press.

Ehsan, Upol, and Mark Riedl. 2020. “Human-Centered Explainable AI: Towards a Reflective Sociotechnical Approach.”

Hoffmann, Anna Lauren. 2018. “Data Violence and How Bad Engineering Choices Can Damage Society.” Medium (blog). April 30, 2018.

———. 2019. “Where Fairness Fails: Data, Algorithms, and the Limits of Antidiscrimination Discourse.” Information, Communication & Society 22 (7): 900–915. https://doi.org/10.1080/1369118X.2019.1573912.

Katell, Michael, Meg Young, Dharma Dailey, Bernease Herman, Vivian Guetler, Aaron Tam, Corinne Binz, Daniella Raz, and P. M. Krafft. 2020. “Toward Situated Interventions for Algorithmic Equity: Lessons from the Field.” In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency.

Metcalf, Jacob, Emanuel Moss, Elizabeth Anne Watkins, Ranjit Singh, and Madeleine Clare Elish. 2021. “Algorithmic Impact Assessments and Accountability: The Co-Construction of Impacts.” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency.

Mohamed, Shakir, Marie-Therese Png, and William Isaac. 2020. “Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence.” Philosophy & Technology 33: 659–684. https://link.springer.com/article/10.1007/s13347-020-00405-8.

Moss, Emanuel, Elizabeth Anne Watkins, Ranjit Singh, Madeleine Clare Elish, and Jacob Metcalf. 2021. “Assembling Accountability: Algorithmic Impact Assessment for the Public Interest.” Data & Society Research Institute. https://datasociety.net/library/assembling-accountability-algorithmic-impact-assessment-for-the-public-interest/.

Noble, Safiya Umoja. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: NYU Press.

Raji, Inioluwa Deborah, Timnit Gebru, Margaret Mitchell, Joy Buolamwini, Joonseok Lee, and Emily Denton. 2020. “Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing.” arXiv:2001.00964 [cs], January. http://arxiv.org/abs/2001.00964.

Slaughter, Rebecca Kelly, Janice Kopec, and Mohamed Batal. 2021. “Algorithms and Economic Justice: A Taxonomy of Harms and a Path Forward for the Federal Trade Commission.” New Haven, CT: Yale Information Society Project.

Sloane, Mona. 2019. “Inequality Is the Name of the Game: Thoughts on the Emerging Field of Technology, Ethics and Social Justice.” Weizenbaum Conference. https://doi.org/10.34669/WI.CP/2.9.

Smith, P., and L. Smith. 2021. “Artificial Intelligence and Disability: Too Much Promise, Yet Too Little Substance?” AI and Ethics 1: 81–86. https://doi.org/10.1007/s43681-020-00004-5.

Smuha, Nathalie A. 2021. “Beyond the Individual: Governing AI’s Societal Harm.” Internet Policy Review 10 (3). https://doi.org/10.14763/2021.3.1574.

Solove, Daniel J., and Danielle Keats Citron. 2016. “Risk and Anxiety: A Theory of Data Breach Harms.” SSRN Electronic Journal. https://doi.org/10.2139/ssrn.2885638.

Steed, Ryan, and Aylin Caliskan. 2021. “Image Representations Learned With Unsupervised Pre-Training Contain Human-like Biases.” Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, March, 701–13. https://doi.org/10.1145/3442188.3445932.

Tufekci, Zeynep. 2015. “Algorithmic Harms Beyond Facebook and Google: Emergent Challenges of Computational Agency.” Colorado Technology Law Journal 13: 203.

Credits & Acknowledgments

This workshop is partially supported by the National Science Foundation, Award #1704425.

This workshop is the result of ongoing collaboration between Emanuel Moss and Jacob Metcalf.

The design of this workshop included brainstorming sessions and feedback on initial drafts of this call from Jenna Burrell, Sareeta Amrute, Ranjit Singh, Elizabeth Anne Watkins, Emnet Tafesse, Robyn Caplan, Siera Dissmore, and Ania Calderon. Additional Data & Society teams contributed their expertise to this call, especially CJ Brody Landow, Chris Redwood, Veronica Eghdami, Joanna Gould, and Sam Hinds.