Algorithmic Impact Methods Lab
AIMLab works to develop methodologies for conducting empirical, participatory algorithmic impact assessments to support the governance of artificial intelligence.
As companies and governments rapidly introduce AI into everyday products and services, it is critical to anticipate the impacts of these systems before they are deployed. AI predictions, recommendations, and decisions can be high-stakes, affecting a person’s well-being, health, or access to opportunities for credit, employment, and public services. The resulting benefits and harms are rarely distributed evenly: research shows that AI systems perform differently across lines of race, gender, class, ability, occupation, and power, and these differences are too often overlooked before deployment. Algorithmic impact assessments (AIAs) are designed to bridge this gap: they help identify and evaluate potential impacts in advance, so that risks can be mitigated and benefits distributed more equitably.
AIAs are emerging as a pillar of responsible AI deployment: they have already been mandated for high-risk systems in several countries, and increasingly by state and local governments. Conventional approaches to AIA rely on self-reported checklists and procedures that demonstrate compliance but do not trace the qualitative experiences of the people an AI deployment would affect; treated as a box-checking exercise, an assessment becomes performative.
Our approach embeds evaluation within the places a system would be used, foregrounding dialogue with affected communities to build accountability relationships that move beyond technical documentation and reporting. Genuine dialogue between developers, policymakers, and the public ensures that AI systems are assessed for their likely real-world consequences.
Our work
The Algorithmic Impact Methods Lab (AIMLab) was launched in May 2023 to explore how algorithmic impact assessments can center the voices of impacted communities. Our goal is to shape emerging best practices as AIAs become increasingly mandated by law and policy, and to show how assessments can be driven by the concerns of those most affected.
Our pilots of community-based AIAs demonstrate that community engagement surfaces flawed assumptions, identifies on-the-ground needs, and helps anticipate failures. Our method for community-based AIA is now available: here you’ll find our toolkit for conducting an assessment, documentation of our pilots, and reflections on lessons learned.
Background
This work builds on prior research at Data & Society that analyzed algorithmic impact assessments (AIAs) as sociotechnical instruments rather than neutral tools, highlighting how their effectiveness depends on institutional and political context (Moss et al. 2021; Metcalf et al. 2021). Early critiques from this work underscored the risks of symbolic compliance and institutional capture if AIAs are treated merely as procedural checkboxes. Instead, our early scholarship emphasized that the success of AIAs depends on embedding them within infrastructures that support broad participation and continual self-examination of how decisions are made and justified. These findings called for practices that foreground affected communities and directly grapple with the structural conditions shaping algorithmic harm. This research also found that today’s AIA processes too often lack this community engagement dimension, and that two-way communication building durable relationships between developers or deployers and impacted communities is essential for meaningful accountability.
See also:
– Metcalf, Moss, Watkins, Singh & Elish (2021) – “Algorithmic Impact Assessments and Accountability: The Co‑construction of Impacts.” Provides the conceptual foundation of AIAs, arguing that “impacts” must be co‑constructed with affected communities to ensure meaningful accountability. AIAs aren’t neutral: they are shaped by institutional contexts and can risk being superficial unless grounded in relationships.
– Moss, Watkins, Singh, Elish & Metcalf (2021) – “Assembling Accountability: Algorithmic Impact Assessment for the Public Interest.” A practical report that maps challenges of constructing AIAs by comparing them with impact assessments in domains like environment and human rights; it offers frameworks for embedding diverse expertise in the process.
As pre-deployment algorithmic impact assessments (AIAs) become a legal and policy requirement in many jurisdictions, our work set out to equip the staff who will conduct these assessments with practical resources. While regulatory frameworks for AIA are emerging in different forms, each reflects a shared recognition that systems must be evaluated for potential harms before they are deployed.
This toolkit is the result of 18 months of piloting and experimentation at Data & Society’s Algorithmic Impact Methods Lab (AIMLab). It provides a practical framework for conducting AIA using a strategy anchored in centering community voices.
Our participatory approach asks the developers and deployers of AI systems to connect directly with the people most affected, anticipating harms and surfacing concerns before systems are put in place. The materials are also adaptable for community organizations seeking to evaluate systems from the ground up.
The toolkit walks users through the steps of a participatory AIA: identifying stakeholders, designing engagement, and translating community input into actionable insights. It helps uncover false assumptions in system design, answer community questions and concerns, and surface key risks and blind spots to mitigate before launch. It also supports organizations to better understand local needs and explore alternative solutions.
It is intended for use by people evaluating any AI system before deployment, such as IT staff in the public sector, user research teams in tech companies, and community engagement or tech teams in non-profits. It complements other pre-deployment assessments, such as those of budget, system performance, organizational readiness, and environmental impact.
Below are editable, ready-to-use templates, scripts, consent forms, session plans, slides, email templates, and other resources to guide this process.
This starter guide gives an overview of the algorithmic impact assessment process and how our method engages impacted communities.
This outreach plan helps identify which community-based organizations you should reach out to as part of your impact assessment process.
Use this template to compose informative welcoming emails to community-based organizations identified in the outreach plan.
ENGAGEMENT SESSION TEMPLATES
The following sessions can be co-hosted with partner organizations. Session one provides background on AI, with a choice of a lecture or interactive format. Session two focuses on impact assessment and eliciting community hopes, expectations, and concerns.
Session One
An AI 101 session to help participants gain a shared understanding of AI in order to inform future discussions about the system you are assessing. Run this session with each group identified in the outreach plan; it can also be run as a standalone session. Note: all links go to Google Docs or Slides.
90-minute lecture + interactive option: AI 101
– Informed consent form also available in Spanish
– AI 101 script also available in Spanish
– AI 101 slides also available in Spanish
45-minute interactive option: ChatGPT Skillshare
– Informed consent form also available in Spanish
– Skillshare script also available in Spanish
– Skillshare slides also available in Spanish
Session Two
A content framework to elicit feedback from impacted community groups on the system being assessed. Run this session with each group identified in the outreach plan; it can be run multiple times for complex projects as they evolve.
– Slides also available in Spanish
– Script also available in Spanish
– Run of show also available in Spanish
– Informed consent form also available in Spanish
AIA TEMPLATE
A fillable template for reporting on the community engagement process, including the content needed to produce a transparent report. Once you’ve completed the process, we recommend posting your AIA publicly and sharing it back with participating communities.
Acknowledgements
Thank you to Micah Epstein for their help on toolkit design. Thank you to Siera Dissmore for helping the toolkit get over the finish line. Thank you to Patrick Davison, Kiara Childs, Eryn Loeb, and Alice Marwick for all of their helpful developmental editing and suggestions. Thanks to Leila Doty, Chelsea Palacio, Serena Oduro, Quinn Anex-Ries, and Madeliene Dwyer for taking the time to review and give comments on our drafts. Thank you to Chris Redwood and Alessa Erawan for our web layout and to Tess Demir for our web design. All errors and omissions are our own.
AIMLab’s approach was developed through iterative field testing with local government, non-profits, industry, and community-based organizations. These pilots helped us refine our approach.
Executive summary
– The first three AIMLab pilots in San José, South Africa, and San Francisco tested participatory methods of algorithmic impact assessment in very different domains: municipal governance, gender-based violence support, and commercial wellness. Taken together, they show the promises and limits of participatory AIA.
– Pilots showed that AIAs surface concrete, actionable insights about risks and harms when impacted communities are directly engaged in the process.
– Our findings confirmed that technical expertise in AI is not required for participants to challenge a system’s premise and anticipate its real-world consequences.
– Across all of our cases, participants surfaced risks that technical audits alone would miss: criminalization of homelessness, retraumatization of survivors, dependency on wellness chatbots. Participants consistently flagged sensitive data practices and surveillance risks, and questioned whether automation was appropriate for the problems at hand.
– AIA findings carry weight when backed by external accountability pressures, such as news media or policy mandates. In the absence of regulatory or institutional commitments to act on identified risks, AIAs risk sliding into symbolism rather than responding to anticipated harms.
Case reports from our pilots
Our “Notes from the Field” series offers short reflections drawn from AIMLab’s hands-on work experimenting with algorithmic impact assessments. They capture emerging methods, unexpected challenges, and lessons learned while partnering with communities, governments, and organizations. These informal reflections are meant to take stock of algorithmic impact assessment and share insights into what it takes to support AI accountability on the ground.
The Uses and Limits of Algorithmic Impact Assessments (October 2025)
AIMLab’s community-based algorithmic impact assessment of the City of San José’s computer vision pilot program was underway when an article on the front page of The Guardian broke news about the pilot. In this case, investigative journalism proved to be an important tool in effecting a change to the scope of the program, and encouraging the city to take action. It also shed light on important lessons for AIA practice and algorithmic accountability more broadly. In this piece, Meg Young and Tamara Kneese outline lessons learned about how future AIA work can be more effective in catalyzing change.
Field Notes on Algorithmic Impact Assessments (January 2024)
As the AIMLab team lays the foundation for its work — seeking to understand the gaps between policy and implementation, and between ethical frameworks and actual practices — their questions are at once theoretical and methodological. In this piece, Tamara Kneese outlines some of the problems and questions framing AIMLab’s research, and reflects on how the history of environmental impact assessment might be a model for how researchers and advocates can push for robust, holistic forms of accountability.
The Algorithmic Impact Methods Lab: Methods from the Field (September 2024)
The team reflects on lessons from AIMLab’s first year, including the patterns they are noticing across sectors and case studies, and why they have shifted from thinking about this work as “impact assessment” to thinking about it as a form of impact engagement. They also lay out some of the emergent themes their work will address.
All Work
– Blog post, Points (October 2025): Our method for conducting community-based algorithmic impact assessments is now available! Explore our extensive toolkit, documentation of our pilots, and a series of reflections on lessons learned.
– Blog post, Points (August 2025): Chatbots are reshaping not only how people seek help, but how they define it, researcher Briana Vecchione writes.
– Primer, Data & Society (June 2025): This primer explores why public agencies do not typically look to affected people for input on technology design, and explains why technology purchasing will be a focal point for needed change.
– Op-ed, Tech Policy Press (February 2025): “Eliciting impacted communities’ input can support true innovation by directing technology development to the problems identified on the ground — rather than those imposed from the top down,” AIMLab Project Director Meg Young writes.
– Press coverage, SF Examiner (December 2024)
– Blog post, Points (September 2024)