
Open Algorithms Law

Testimony by Data & Society to the NYC Council's Committee on Technology

Janet Haven and Andrew Selbst

On April 4, 2019, Data & Society Executive Director Janet Haven and Postdoctoral Scholar Andrew Selbst testified before the New York City Council’s Committee on Technology about the Open Algorithms Law (Local Law 49 of 2018). They called for oversight of the Automated Decision Systems Task Force to ensure access to “details of ADS systems in use by specific agencies” and a public engagement process. 

Other testimony came from Task Force members Solon Barocas and Julia Stoyanovich and from BetaNYC Executive Director Noel Hidalgo. A video of the full hearing is available online.

Please find Janet Haven and Andrew Selbst’s written testimony below. 


Our names are Janet Haven and Andrew D. Selbst. We are the executive director and a postdoctoral scholar at the Data & Society Research Institute, an independent non-profit research center dedicated to studying the social and cultural impacts of data-driven and automated technologies. Over the past five years, Data & Society has focused on the social and legal impacts of automated decision-making and artificial intelligence, publishing research and advising policymakers and industry actors on issues such as algorithmic bias, explainability, transparency, and accountability more generally.

Government services and operations play a crucial role in the lives of New York City’s citizens. Transparency and accountability in a government’s use of automated decision-making systems matters. Across the country, automated decision-making systems based on nonpublic data sources and algorithmic models currently inform decision-making on policing, criminal justice, housing, child welfare, educational opportunities, and myriad other fundamental issues.

This Task Force was set up to begin the hard work of building transparent and accountable processes to ensure that the use of such systems in New York City is geared to just outcomes, rather than only the most efficient ones. The adoption of such systems requires a reevaluation of current approaches to due process and the adoption of appropriate safeguards. It may require entirely new approaches to accountability when the city uses automated systems, as many such systems, through their very design, can obscure or conceal policy or decision-making processes.

We at Data & Society lauded the decision to establish a Task Force focused on developing a better understanding of these issues. Indeed, we celebrated the city leadership’s prescience in being the first government in the nation to establish a much-needed evidence base regarding the inherent complexity accompanying ADS adoption across multiple departments. However, we have seen little evidence that the Task Force is living up to its potential. New York has a tremendous opportunity to lead the country in defining these new public safeguards, but time is growing short to deliver on the promise of this body.

We want to make two main points in our testimony today.

First, for the Task Force to complete its mandate in any meaningful sense, it must have access to the details of ADS systems in use by specific agencies and the ability to work closely with representatives from across agencies using ADS. We urge that Task Force members be given immediate access to specific, agency-level automated decision-making systems currently in use, as well as to the leadership in those departments and to others with insight into the design and use of these systems.

Social context is essential to defining fair and just outcomes.[1] The city is understood to be using ADS in such diverse contexts as housing, education, child services, and criminal justice. The very idea of a fair or just outcome is impossible to define or debate without reference to the social context. Understanding the different value tradeoffs in decisions about pretrial risk assessments tells you nothing whatsoever about school choice. What is fair, just, or accountable in public housing policy says nothing about what is fair, just, and accountable in child services. This ability to address technological systems within the social context where they are used is what makes the ADS Task Force so important, and potentially so powerful in defining real accountability measures.

The legislative mandate itself also demonstrates why the Task Force requires access to agency technologies. Under the enacting law, the purpose of the Task Force is to make recommendations particular to the City’s agencies.[2] Specifically, the Task Force must make recommendations for procedures by which explanations of the decisions can be requested, biases can be detected, harms from biases can be redressed, the public can assess the ADS, and the systems and data can be archived.[3] Each of these recommendations applies not to automated decision systems generally, but to “agency automated decision systems,” a term defined separately in the text of the law.[4] Importantly, the law also mandates that the Task Force make recommendations about “[c]riteria for identifying which agency automated decision systems” should be subject to these procedures.[5] Thus, the legislative mandate makes clear that for the Task Force to do its work, it will require access to the technologies that city agencies currently use or plan to use, as well as the people in charge of their operation. Lacking this level of detail on actual agency-level use of automated decision-making systems, the recommendations can only be generic. Such generic recommendations will be ineffective because they will not be informative enough for the city to act on.

If the city wanted generic guidelines or recommendations for ADS, it could have looked to existing scholarship on these issues instead of forming a Task Force. Indeed, an entire interdisciplinary field of scholarship has emerged in the last several years, dedicated to the issues of Fairness, Accountability, and Transparency (FAT*) in automated systems.[6] This field has made significant strides in formulating mathematical definitions of fairness that computers can parse, and in developing myriad potential methods for bias reduction in automated systems.
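To give a concrete sense of what such a definition looks like, one of the simplest criteria in this literature is demographic parity, which asks whether a system issues favorable decisions at the same rate across demographic groups. The following sketch is purely illustrative; the decisions and group labels are hypothetical and do not represent any city system.

```python
# Illustrative only: demographic parity, one of the simplest formal
# fairness definitions in the FAT* literature. All data below is
# hypothetical and does not represent any city system.

def demographic_parity_gap(decisions, groups):
    """Largest difference in favorable-decision rates between any
    two groups; 0.0 would mean perfect parity on this measure."""
    counts = {}  # group -> (favorable decisions, total decisions)
    for decision, group in zip(decisions, groups):
        favorable, total = counts.get(group, (0, 0))
        counts[group] = (favorable + int(decision), total + 1)
    rates = [favorable / total for favorable, total in counts.values()]
    return max(rates) - min(rates)

# Hypothetical decisions (True = favorable outcome) and group labels.
decisions = [True, False, True, True, False, False, True, False]
groups = ["A", "A", "A", "B", "B", "B", "B", "B"]

# Group A receives favorable outcomes at a rate of 2/3; group B at 2/5.
print(demographic_parity_gap(decisions, groups))  # ~0.267
```

Even this simple criterion illustrates the problem with generic guidance: whether parity of this kind is the right measure at all, and what counts as a “favorable” decision, depends entirely on the social context in which a given agency’s system operates.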

But the academic work has fundamental limitations. Much of the research is, by necessity or due to limited access, based on small hypothetical scenarios—toy problems—rather than real-world applications of machine learning technology.[7] This work proceeds, as is characteristic of theoretical modeling, by stating assumptions about the world and about the datasets being used. To translate these solutions to the real world, researchers would have to know whether the datasets and other assumptions match the real-world scenarios.

Using information from city agencies, the Task Force has the ability to advance beyond the academic focus on toy problems devoid of social context and to assess particular issues for systems used in practice. Without information about the systems in use, the Task Force’s recommendations will be limited to procedures at the greatest level of generality—things we could already guess, such as testing a system for bias or keeping it simple enough to be explainable. But with information about these systems, the Task Force can examine the particular challenges and tradeoffs at issue. With community input and guidance, it can assess the appropriateness of different definitions of bias in a given context and debate trade-offs between accuracy and explainability in specific social environments. The recommendations of the Task Force will only be useful if they are concrete and actionable, and that can only be achieved if its members are allowed to examine the way ADS operate in practice, with a view into both the technical and the social systems informing outcomes.

Second, we urge the Task Force to prioritize public engagement. Because social context is essential to defining fair and just outcomes, meaningful engagement with community stakeholders is fundamental to this process. Once the Task Force has access to detailed information about the ADS systems in use, public listening sessions must be held to understand community experiences and concerns, with the goal of using that feedback to shape the Task Force’s process going forward. Iterating on and reviewing recommendations with community stakeholders as this work moves forward will be important to arriving at truly transparent, accountable, and just outcomes.

We are here today because we continue to believe the Task Force has great potential. We strongly believe that the Task Force’s work needs to be undertaken thoughtfully and contextually, centering on cooperation, transparency, and public engagement.  The Task Force’s goal needs to be offering actionable and concrete recommendations on the use of ADS in New York City government. We hope that the above testimony provides useful suggestions to move toward that goal.

Thank you.


[1] See generally Andrew D. Selbst et al., Fairness and Abstraction in Sociotechnical Systems, Proceedings of the 2019 ACM Conference on Fairness, Accountability, and Transparency (ACM FAT*), at 59.

[2] See Local Law No. 49 of 2018, Council Int. No. 1696-A of 2017 [hereinafter Local Law 49] (repeatedly referring to “agency automated decision systems”).

[3] Id. §§ 3(b)–(f).

[4] Id. § 1(a).

[5] Id. § 3(a) (emphasis added).

[6] ACM Conference on Fairness, Accountability, and Transparency (ACM FAT*), https://fatconference.org/

[7] See generally Selbst et al., supra note 1, at 59.