The New York City Department of Consumer and Worker Protection (“DCWP” or “Department”) is engaged in a second round of rule-making to implement new legislation governing the use of automated employment decision tools (AEDTs) within New York City. That law, Local Law 144 (LL144), requires employers who deploy these systems to account transparently for how the systems behave toward the actual population of job seekers, to notify the public and applicants of their use, and to give applicants the right to request alternative methods. It is one of the first laws in the world to mandate independent bias audits of algorithmic systems.
A proposed rule drafted by the Department would specify how the City will implement and enforce this consequential law. Data & Society believes the rule should be revised to offer greater protection to the many job seekers who are subjected to algorithmic scoring mechanisms and their attendant biases.
In our public comment, we recognize that local lawmakers were right to identify that AEDTs have adverse consequences for job seekers from a variety of communities. We also express concern that some details of the proposed rule could blunt the efficacy of the law. For example, we are concerned that the rule:
- Defines automated employment decision tools too narrowly. LL144 specifies that the Council’s intent was to cover all AEDTs that “substantially assist or replace” human judgment in hiring and promotion processes. Yet the DCWP’s proposed rules appear to cover only AEDTs that replace human judgment entirely or predominantly. AEDTs, however, are typically not used to make final hiring or promotion decisions; instead, they are used to identify pools of candidates. In the vast majority of systems on the commercial market, humans still make the final decision after the algorithmic system has sorted and scored applicants. Some large employers, such as Amazon’s warehouses, may use fully automated systems, but the proposed rule’s definition may not apply to most employers currently deploying AEDTs. We suspect that most job seekers who are algorithmically scored would not be protected under these rules, which we do not believe was the City Council’s intent in passing LL144.
- Misunderstands how machine learning bias is propagated. The proposed rule defines “machine learning, statistical modeling, data analytics, or artificial intelligence” too narrowly, in a way that reflects an inaccurate understanding of how bias operates in machine learning: the definition over-emphasizes deep learning techniques and unstructured training data. Most AEDTs, however, are trained on, and take as inputs, highly structured resume data, and they rely on human decisions throughout model building, all of which is also vulnerable to bias (see the first sketch after this list). Under the proposed definition, these routes to biased hiring and promotion decisions could be technically excluded, which we believe would ultimately circumvent the intention behind the law. The rule-making would better represent the intent of the law, and protect more job seekers, if it kept the definition simpler and covered any use of algorithmic scoring rather than emphasizing the techniques used to build machine learning models.
- Restricts bias audits to gender and race/ethnicity. The proposed rule requires bias auditing only of the gender and race/ethnicity categories defined by the US Equal Employment Opportunity Commission (EEOC). While these categories are the right starting point, we argue that the EEOC categories should be a floor, not a ceiling, and that other categories should be added over time; the second sketch after this list shows how the underlying audit arithmetic extends to any category.
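To make the definitional concern concrete, here is a minimal, purely hypothetical sketch, not drawn from our comment: every field name, weight, and threshold below is invented. It shows an AEDT that uses no deep learning and no unstructured data, yet still encodes bias through human modeling choices over structured resume fields.

```python
# Hypothetical AEDT: a hand-weighted linear score over structured resume
# fields. No neural network, no unstructured training data -- yet every
# human choice below (which fields to use, their weights, the cutoff) is a
# potential route for bias.

# Invented structured resume records.
applicants = [
    {"name": "A", "years_experience": 6, "employment_gap_years": 0, "elite_school": 1},
    {"name": "B", "years_experience": 8, "employment_gap_years": 2, "elite_school": 0},
    {"name": "C", "years_experience": 5, "employment_gap_years": 3, "elite_school": 0},
]

# Human-chosen weights: penalizing employment gaps can proxy for caregiving
# or disability; rewarding an "elite school" flag can proxy for race and class.
WEIGHTS = {"years_experience": 1.0, "employment_gap_years": -2.0, "elite_school": 3.0}
THRESHOLD = 5.0  # human-chosen cutoff for advancing a candidate

def score(applicant: dict) -> float:
    """Simple statistical scoring: a weighted sum of structured fields."""
    return sum(weight * applicant[field] for field, weight in WEIGHTS.items())

for a in applicants:
    s = score(a)
    print(a["name"], s, "advance" if s >= THRESHOLD else "reject")
# Applicant B has the most experience but is rejected; the gap penalty and
# school bonus, not any deep learning technique, drive the disparity.
```

A technique-focused definition could leave a system like this outside the rule’s scope even though it sorts and scores applicants in exactly the way the law targets.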
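Relatedly, the arithmetic of a bias audit is not tied to any particular demographic category. The sketch below uses invented group labels and outcomes to compute per-group selection rates and impact ratios, one plausible form an audit’s core calculation might take; the same code runs unchanged if further categories (age, disability status, and so on) are added.

```python
from collections import defaultdict

# Invented audit records: (demographic category, whether the AEDT advanced
# the candidate). Nothing below is specific to gender or race/ethnicity.
outcomes = [
    ("group_1", True), ("group_1", True), ("group_1", False), ("group_1", True),
    ("group_2", True), ("group_2", False), ("group_2", False), ("group_2", False),
]

selected = defaultdict(int)
total = defaultdict(int)
for group, was_selected in outcomes:
    total[group] += 1
    selected[group] += was_selected  # True counts as 1

# Selection rate per group, and impact ratio relative to the most-selected group.
rates = {g: selected[g] / total[g] for g in total}
best = max(rates.values())
for g in sorted(rates):
    print(f"{g}: selection rate {rates[g]:.2f}, impact ratio {rates[g] / best:.2f}")
# An impact ratio well below 1.0 (e.g., under the EEOC's four-fifths rule of
# thumb, below 0.8) flags possible adverse impact for that group.
```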
Read our full comment here, or in the forthcoming public record compiled by the NYC DCWP.