Meg Young and Tamara Kneese write about AIMLab’s community-based algorithmic impact assessment of the City of San José’s computer vision pilot program, and what happened when a front-page article broke news about the pilot while it was underway.
October 29, 2025
In the process of collecting case studies that demonstrate how impacted communities can better drive the terms of algorithmic impact assessments (AIAs), we have been taking stock of these assessments more broadly. What do they afford? What are their limitations? What resources are needed to complete them, and who pays? Most importantly, how do organizations planning to deploy a system respond once the evidence produced by an AIA is available? One of our case studies produced insights that have the potential to strengthen AIA practice and algorithmic accountability as a field.
Last year, our community-based algorithmic impact assessment of the City of San José’s computer vision pilot program was underway when an article on the front page of the UK-based newspaper The Guardian broke news about the pilot. Todd Feathers reported that in July 2023, the San José city council had directed city staff to evaluate object detection technology’s ability to automate 72-hour parking enforcement. The city began to test this use case by mounting cameras on a municipal vehicle driving periodically through the city’s District 10. The resulting images were fed into computer vision software and used to train the participating companies’ algorithms to detect specific objects. Those objects, Feathers reported, included not only things like potholes, obstructions to bike lanes, and illegal dumping, but tents and lived-in vehicles — which could be occupied by people experiencing homelessness. “San José’s foray into automated surveillance of homelessness is the first of its kind in the country,” Feathers wrote. The public response was swift and fierce, with the Oakland-based Tech Equity Collaborative describing the pilot as dystopian.
Anticipating these concerns, partners in the City’s digital privacy office had already been working with Data & Society’s AIMLab on an algorithmic impact assessment process that would enable community-based organizations in San José to respond to the pilot and inform its scope. As part of that work, we had heard concerns from many residents, primarily those working on immigrant rights; the city had reached out directly to organizations representing people experiencing homelessness. This public feedback was meant to inform the pilot study before the technology was more broadly adopted or deployed.
In our impact assessment process, many residents expressed concern that systems for detecting lived-in vehicles and encampments would be used to penalize people, rather than to help them access housing and support. Participants wanted clarity on the purpose of the technology: would it be used for enforcement, public safety, or connecting people to services? They also raised structural issues, such as the lack of affordable housing, long waitlists for shelters, and high rents that were pushing more people to live in vehicles or RVs, and they questioned whether the city was doing enough to address rising rents and prevent displacement. They discussed the challenges faced by the homeless community, including mental health issues and a lack of sufficient resources and support.
The Guardian piece raised many of the same questions and concerns, but it was far more successful at raising the public profile of the problem and galvanizing a response. Following the article’s reception, the city moved away from the elements of the pilot that posed the gravest risk to residents experiencing homelessness. In additional coverage published in SFGate, San José staff said they would reassess the pilot project. Demonstrating that it understood the vehement public concern about the anticipated harms of AI-enabled surveillance, the city re-scoped its object detection goals from “vehicle blight,” lived-in vehicles, and encampments to a focus on road safety, which had emerged as a resident priority during the impact assessment.
Algorithmic impact assessments are designed to structure deliberation, surface concerns, and produce documented evidence. They are useful for convening impacted communities and other stakeholders, and for formalizing and reporting on conversations that might otherwise not happen. This work, however, is largely procedural: unlike regulation, litigation, or investigative reporting, AIAs rarely contain mechanisms that compel organizations to act on findings. Their influence depends on political will and on external pressures that give their recommendations weight.
In this case, investigative journalism proved to be an important tool for changing the scope of the program and encouraging the city to take action. It also shed light on important lessons for AIA practice and algorithmic accountability more broadly: to catalyze change more effectively, future AIA work should be situated within a wider ecosystem of accountability tools.
Algorithmic impact assessments are usually conducted by (and with) system developers and deployers; on their own, they cannot guarantee redress or halt deployments. As this case study shows, organizations are often more responsive to political backlash and legal liability than to documented community concerns. Journalism, in particular, can galvanize public opinion at scales that assessments alone cannot, forcing institutions to confront questions of power, surveillance, and harm. Our partners in San José underscored that AIAs can be a tool for preventing public backlash, because they involve working with community-based organizations at the outset. While we believe this case underscores the limits of the AIA as a change mechanism in isolation, it also points to the importance of a healthy accountability ecosystem that includes tools such as journalism, open data, public records, and whistleblower protections. Each of these is as essential to responsible AI practice as impact assessment; the more tools we have, the better.