AIMLab champions a participatory approach to algorithmic impact assessments (AIAs): prolonged engagement with communities so that their concerns drive the evaluation of systems that may affect them. With a dual commitment to experimentation and standardization, our work at AIMLab provides empirical evidence to close the gaps between AI policy and implementation, and between ethical frameworks and actual practice. Our questions are at once theoretical and methodological: What is an impact, and how do we measure it? If measurable harms attach to a particular technology, how do we mitigate them? Our process involves careful relationship-building and multidisciplinary translation as we work to reconcile the internal compliance frameworks of companies and organizations with a justice-oriented sociotechnical analysis.
Pilot Projects
- Community red-teaming of a chatbot intended to interface with survivors of gender-based violence.
- Engaging community-based organizations in assessing a city government’s object detection pilot study.
- An analysis of chatbots intended for wellness and care purposes across a range of case studies.
- A sociotechnical analysis of Green AI, drawing on human rights frameworks along with carbon and water cost considerations.
- A community-led impact assessment of data centers’ social and environmental impacts.
- Worker-led data collection and analysis to mitigate algorithmic wage discrimination and generative AI’s labor impacts.
- Participation in a red-teaming exercise on election misinformation, as part of the AI Democracy Projects.
Illustration by Jo Zixuan. This image is not licensed under Creative Commons.