featured


filtered by: criminal justice


Data & Society Postdoctoral Scholar Andrew Selbst argues for regulations in big data policing.

“The way police are adopting and using these technologies means more people of color are arrested, jailed, or physically harmed by police, while the needs of communities being policed are ignored.”


D&S lawyer-in-residence Rebecca Wexler describes the intersection of automated technologies, trade secrets, and the criminal justice system.

For-profit companies dominate the criminal justice technologies industry and produce computer programs that are widespread throughout the justice system. These automated programs deploy cops, analyze forensic evidence, and assess the risk levels of inmates. But these technological advances may be making the system less fair, and without access to the source code, it’s impossible to hold computers to account.


Washington Monthly | 06.13.17

Code of Silence

Rebecca Wexler

D&S lawyer-in-residence Rebecca Wexler unpacks how private companies hide flaws in software that the government uses to convict and exonerate people in the criminal justice system.

What’s alarming about protecting trade secrets in criminal cases is that it allows private companies to withhold information not from competitors, but from individual defendants like Glenn Rodríguez. Generally, a defendant who wants to see evidence in someone else’s possession has to show that it is likely to be relevant to his case. When the evidence is considered “privileged,” the bar rises: he often has to convince the judge that the evidence could be necessary to his case—something that’s hard to do when, by definition, it’s evidence the defense hasn’t yet seen.


D&S resident Rebecca Wexler describes the flaws of an increasingly automated criminal justice system.

The root of the problem is that automated criminal justice technologies are largely privately owned and sold for profit. The developers tend to view their technologies as trade secrets. As a result, they often refuse to disclose details about how their tools work, even to criminal defendants and their attorneys, even under a protective order, even in the controlled context of a criminal proceeding or parole hearing.


D&S affiliate Desmond Patton breaks down how social media can lead to gun violence in this piece in The Trace.

Social media doesn’t allow for the opportunity to physically de-escalate an argument. Instead, it offers myriad ways to exacerbate a brewing conflict as opposing gangs or crews and friends and family take turns weighing in.


Harvard Business Review | 04.19.17

Creating Simple Rules for Complex Decisions

Jongbin Jung, Connor Concannon, Ravi Shroff, Sharad Goel, Daniel G. Goldstein

Jongbin Jung, Connor Concannon, D&S fellow Ravi Shroff, Sharad Goel, and Daniel G. Goldstein explore new methods for machine learning in criminal justice.

Simple rules certainly have their advantages, but one might reasonably wonder whether favoring simplicity means sacrificing performance. In many cases the answer, surprisingly, is no. We compared our simple rules to complex machine learning algorithms. In the case of judicial decisions, the risk chart above performed nearly identically to the best statistical risk assessment techniques. Replicating our analysis in 22 varied domains, we found that this phenomenon holds: Simple, transparent decision rules often perform on par with complex, opaque machine learning methods.
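
To make that comparison concrete, here is a minimal, hypothetical Python sketch: it generates synthetic pretrial-style data (the features, weights, and sample size are all invented, not the authors' data or code) and compares a full logistic regression against a two-feature integer point score of the kind the article describes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 20_000

# Synthetic pretrial records: two strong predictors plus weakly informative extras.
age = rng.integers(18, 70, n)
prior_fta = rng.poisson(0.8, n)               # prior failures to appear (hypothetical feature)
noise = rng.normal(size=(n, 8))               # eight weak, noisy features
logit = -2.0 + 0.04 * (40 - age) + 0.9 * prior_fta + noise @ rng.normal(0, 0.05, 8)
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # 1 = failed to appear

X = np.column_stack([age, prior_fta, noise])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# "Complex" model: logistic regression over all ten features.
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc_complex = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])

# "Simple rule": an integer point score on two features that a person could tally by hand.
simple_score = 1 * (X_te[:, 0] < 30) + 2 * np.minimum(X_te[:, 1], 3)
auc_simple = roc_auc_score(y_te, simple_score)

print(f"AUC, full logistic model:      {auc_complex:.3f}")
print(f"AUC, two-feature point score:  {auc_simple:.3f}")
```

Because most of the signal in this toy setup sits in two features, the hand-tallied score comes close to the full model; the article's claim is that the same pattern holds across many real decision domains.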


paper | 04.02.17

Combatting Police Discrimination in the Age of Big Data

Sharad Goel, Maya Perelman, Ravi Shroff, David Alan Sklansky

Sharad Goel, Maya Perelman, D&S fellow Ravi Shroff, and David Alan Sklansky examine a method that can be used to “reduce the racially disparate impact of pedestrian searches and to increase their effectiveness.” Abstract is below:

The exponential growth of available information about routine police activities offers new opportunities to improve the fairness and effectiveness of police practices. We illustrate the point by showing how a particular kind of calculation made possible by modern, large-scale datasets — determining the likelihood that stopping and frisking a particular pedestrian will result in the discovery of contraband or other evidence of criminal activity — could be used to reduce the racially disparate impact of pedestrian searches and to increase their effectiveness. For tools of this kind to achieve their full potential in improving policing, though, the legal system will need to adapt. One important change would be to understand police tactics such as investigatory stops of pedestrians or motorists as programs, not as isolated occurrences. Beyond that, the judiciary will need to grow more comfortable with statistical proof of discriminatory policing, and the police will need to be more receptive to the assistance that algorithms can provide in reducing bias.
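
To illustrate the kind of calculation the abstract describes, here is a small Python sketch. The stop records and feature weights are entirely synthetic (not the authors' model or data); it estimates each stop's probability of yielding contraband and compares searching every stop with searching only when that probability clears a threshold.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 50_000

# Hypothetical stop records: a handful of circumstance features and whether
# contraband was actually found. Feature effects are made up for illustration.
X = rng.normal(size=(n, 5))
p_hit = 1 / (1 + np.exp(-(-3.0 + X @ np.array([0.8, 0.5, 0.3, 0.0, 0.0]))))
found = rng.binomial(1, p_hit)

# Estimate each stop's probability of yielding contraband from past records
# (fit and evaluated on the same synthetic data purely for illustration).
model = LogisticRegression(max_iter=1000).fit(X, found)
pred = model.predict_proba(X)[:, 1]

# Policy comparison: search every stop vs. search only above a probability threshold.
threshold = 0.10
searched = pred >= threshold

print(f"Hit rate, all stops searched:     {found.mean():.3f}")
print(f"Hit rate, model-guided searches:  {found[searched].mean():.3f}")
print(f"Share of stops still searched:    {searched.mean():.3f}")
```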


D&S lawyer-in-residence Rebecca Wexler analyzes the unreliability of video authentication in Slate.

When forensic scientists refuse to reveal details about how their experimental methods work, they erode trust in the ideal of scientific objectivity, and in the legitimacy of their results. There is already a dearth of trust surrounding forensic sciences. Just last fall, President Obama’s Council of Advisors on Science and Technology reported that even some long-practiced forensic disciplines, like bite-mark analysis and some methods for analyzing complex mixtures of DNA, are not foundationally valid.


D&S lawyer-in-residence Rebecca Wexler provides an analysis of trade secrecy in the criminal justice system. Abstract is below:

From policing to evidence to parole, data-driven algorithmic systems and other automated software programs are being adopted throughout the criminal justice system. The developers of these technologies often claim that the details about how the programs work are trade secrets and, as a result, cannot be disclosed in criminal cases. This Article turns to evidence law to examine the conflict between transparency and trade secrecy in the criminal justice system. It is the first comprehensive account of trade secret evidence in criminal cases. I argue that recognizing a trade secrets evidentiary privilege in criminal proceedings is harmful, ahistorical, and unnecessary. Withholding information from the accused because it is a trade secret mischaracterizes due process as a business competition.


D&S lawyer-in-residence Rebecca Wexler testifies about government oversight of forensic science laboratories in the State of New York.

I submit these comments to the Assembly Standing Committee on Codes, the Assembly Standing Committee on Judiciary, and the Assembly Standing Committee on Oversight, Analysis and Investigation. Thank you for inviting my testimony on government oversight of forensic science laboratories in the State of New York. As a Resident at The Data and Society Research Institute, my work focuses on issues arising from data and technology in the criminal justice system. I want to draw your attention to trade secrets claims in forensic technologies that threaten criminal defendants’ rights to confront and cross-examine the evidence against them; to compulsory process to obtain evidence in their favor; and to due process.


D&S fellow Ravi Shroff examines Cathy O’Neil’s analysis of criminal justice algorithms, like predictive policing.

There are a few minor mischaracterizations and omissions in this chapter of Weapons of Math Destruction that I would have liked O’Neil to address. CompStat is not, as she suggests, a program like PredPol’s. This is a common misconception; CompStat is a set of organizational and management practices, some of which use data and software. In the section on stop-and-frisk, the book implies that a frisk always accompanies a stop, which is not the case; in New York, only about 60% of stops included a frisk. Moreover, the notion of “probable cause” is conflated with “reasonable suspicion,” which are two distinct legal standards. In the section on recidivism, O’Neil asks of prisoners,

“is it possible that their time in prison has an effect on their behavior once they step out? […] prison systems, which are awash in data, do not carry out this highly important research.”

Although prison systems may not conduct this research, there have been numerous academic studies that generally indicate a criminogenic effect of harsh incarceration conditions. Still, “Civilian Casualties” is a thought-provoking exploration of modern policing, courts, and incarceration. By highlighting the scale and opacity of WMDs in this context, as well as their vast potential for harm, O’Neil has written a valuable primer for anyone interested in understanding and fixing our broken criminal justice system.


points | 10.26.16

Models in Practice

Angèle Christin

D&S affiliate Angèle Christin writes a response piece to Cathy O’Neil’s Weapons of Math Destruction.

One of the most striking findings of my research so far is that there is often a major gap between what the top administrations of criminal courts say about risk scores and the ways in which judges, prosecutors, and court officers actually use them. When asked about risk scores, higher-ups often praise them unequivocally. For them, algorithmic techniques bear the promise of more objective sentencing decisions. They count on the instruments to help them empty their jails, reduce racial discrimination, and reduce expenses. They can’t get enough of them: most courts now rely on as many as four, five, or six different risk-assessment tools.

Yet it is unclear whether these risk scores always have the meaningful effect on criminal proceedings that their designers intended. During my observations, I realized that risk scores were often ignored. The scores were printed out and added to the heavy paper files about defendants, but prosecutors, attorneys, and judges never discussed them. The scores were not part of the plea bargaining and negotiation process. In fact, most of the judges and prosecutors told me that they did not trust the risk scores at all. Why should they follow the recommendations of a model built by a for-profit company that they knew nothing about, using data they didn’t control? They didn’t see the point. For better or worse, they trusted their own expertise and experience instead.


D&S researcher Josh Scannell responds to Georgetown Center on Privacy & Technology’s “The Perpetual Line-Up” study.

Reports like “The Perpetual Line-Up” force a fundamental question: What do we want technologies like facial recognition to do? Do we want them to automate narrowly “unbiased” facets of the criminal justice system? Or do we want to end the criminal justice system’s historical role as an engine of social injustice? We can’t have both.


The mythology surrounding “big data” rests on the notion that technical systems can increase efficiency and decrease bias. Such “neutral” systems are supposedly good for implementing legal logic because, like these systems, law relies on binaries in decision-making, removing the gray and fuzzy from the equation. The problem with this formulation is that efficiency is not necessarily desirable, bias is baked into the data sets and reified technically as well as through interpretation, and legal binaries are neither socially productive nor logically sound.

D&S founder danah boyd responds to Margaret Hu’s work in Big Data Blacklisting with supportive arguments that further Hu’s assertions. boyd discusses how procedure and efficiency make algorithmic decision-making attractive to policymakers and bureaucrats, yet the flawed systems in place do not make data neutral; in fact, ‘blacklists purposefully distance decision-makers from the humanity of those who are being labeled.’


D&S advisor Ethan Zuckerman defends the use of video recording of police officers.

If video doesn’t lead to the indictment of officers who shoot civilians, are we wrong to expect justice from sousveillance? The police who shot Castile and Sterling knew they were likely to be captured on camera—from their police cars, surveillance cameras, and cameras held by bystanders—but still used deadly force in situations that don’t appear to have merited it. Is Mann’s hope for sousveillance simply wrong?

Not quite. While these videos rarely lead to grand jury indictments, they have become powerful fuel for social movements demanding racial justice and fairer policing. In the wake of Sterling’s and Castile’s deaths, protests brought thousands into the streets in major U.S. cities and led to the temporary closure of interstate highways.


The FBI recently announced its plan to request that its massive biometrics database, called the Next Generation Identification (NGI) system, be exempted from basic requirements under the Privacy Act. These exemptions would prevent individuals from finding out if they are included within the database, whether their profile is being shared with other government entities, and whether their profile is accurate or contains false information. Forty-four organizations, including Data & Society, sent a letter to the Department of Justice asking for a 30-day extension to review the proposal.

Points: In this Points original, Robyn Caplan highlights the First Amendment implications of the FBI’s request for exemptions from the Privacy Act for its Next Generation Identification system. Public comment on the FBI’s proposal is being accepted until July 6, 2016.


Code is key to civic life, but we need to start looking under the hood and thinking about the externalities of our coding practices, especially as we’re building code as fast as possible with few checks and balances.

Points: “Be Careful What You Code For” is danah boyd’s talk from Personal Democracy Forum 2016 (June 9, 2016); her remarks have been modified for Points. danah exhorts us to mind the externalities of code and proposes audits as a way to reckon with the effects of code in high stakes areas like policing. Video is available here.


ProPublica | 05.23.16

Machine Bias: Risk Assessments in Criminal Sentencing

Julia Angwin, Jeff Larson, Surya Mattu, Lauren Kirchner, ProPublica

D&S fellow Surya Mattu investigated bias in risk assessments, algorithmically generated scores predicting the likelihood of a person committing a future crime. These scores are increasingly used in courtrooms across America to inform decisions about who can be set free at every stage of the criminal justice system, from assigning bond amounts to fundamental decisions about a defendant’s freedom:

We obtained the risk scores assigned to more than 7,000 people arrested in Broward County, Florida, in 2013 and 2014 and checked to see how many were charged with new crimes over the next two years, the same benchmark used by the creators of the algorithm.

The score proved remarkably unreliable in forecasting violent crime: Only 20 percent of the people predicted to commit violent crimes actually went on to do so.

When a full range of crimes were taken into account — including misdemeanors such as driving with an expired license — the algorithm was somewhat more accurate than a coin flip. Of those deemed likely to re-offend, 61 percent were arrested for any subsequent crimes within two years.

We also turned up significant racial disparities, just as Holder feared. In forecasting who would re-offend, the algorithm made mistakes with black and white defendants at roughly the same rate but in very different ways.

  • The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants.
  • White defendants were mislabeled as low risk more often than black defendants.

Could this disparity be explained by defendants’ prior crimes or the type of crimes they were arrested for? No. We ran a statistical test that isolated the effect of race from criminal history and recidivism, as well as from defendants’ age and gender. Black defendants were still 77 percent more likely to be pegged as at higher risk of committing a future violent crime and 45 percent more likely to be predicted to commit a future crime of any kind.
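
The two disparities listed above are group-wise false positive and false negative rates. Here is a toy Python sketch of how such rates are computed by race; the table, column names, and values are illustrative, not ProPublica's actual data or code.

```python
import pandas as pd

# Toy table of defendants with the tool's risk label and the two-year outcome.
df = pd.DataFrame({
    "race":       ["black", "black", "black", "white", "white", "white"] * 100,
    "high_risk":  [1, 1, 0, 0, 1, 0] * 100,
    "reoffended": [0, 1, 0, 0, 1, 1] * 100,
})

def error_rates(group: pd.DataFrame) -> pd.Series:
    # False positive: labeled high risk but did not reoffend within two years.
    fp = ((group.high_risk == 1) & (group.reoffended == 0)).sum()
    # False negative: labeled low risk but did reoffend within two years.
    fn = ((group.high_risk == 0) & (group.reoffended == 1)).sum()
    return pd.Series({
        "false_positive_rate": fp / (group.reoffended == 0).sum(),
        "false_negative_rate": fn / (group.reoffended == 1).sum(),
    })

print(df.groupby("race").apply(error_rates))
```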


Researchers Alexandra Mateescu and Alex Rosenblat, together with D&S founder danah boyd, published a paper examining police-worn body cameras and their potential to provide avenues for police accountability and to foster improved police-community relations. The authors raise concerns about the potentially harmful consequences of constant surveillance, which has prompted civil rights groups to warn that body-worn cameras may violate privacy and exacerbate existing police practices that have historically victimized people of color and vulnerable populations. They consider whether one can demand greater accountability without increased surveillance, and they suggest that “the trajectory laid out by body-worn cameras towards greater surveillance is clear, if not fully realized, while the path towards accountability has not yet been adequately defined, let alone forged.”

The intimacy of body-worn cameras’ presence—which potentially enables the recording of even mundane interpersonal interactions with citizens—can be exploited with the application of technologies like facial recognition; this can exacerbate existing practices that have historically victimized people of color and vulnerable populations. Not only do such technologies increase surveillance, but they also conflate the act of surveilling citizens with the mechanisms by which police conduct is evaluated. Although police accountability is the goal, the camera’s view is pointed outward and away from its wearer, and audio recording captures any sounds within range. As a result, it becomes increasingly difficult to ask whether one can demand greater accountability without increased surveillance at the same time.

Crafting better policies on body-worn camera use has been one of the primary avenues for balancing the right of public access with the need to protect against this technology’s invasive aspects. However, no universal policies or norms have been established, even on simple issues such as whether officers should notify citizens that they are being recorded. What is known is that body-worn cameras present definite and identifiable risks to privacy. By contrast, visions of accountability have remained ill-defined, and the role to be played by body-worn cameras cannot be easily separated from the wider institutional and cultural shifts necessary for enacting lasting reforms in policing. Both the privacy risks and the potential for effecting accountability are contingent upon an ongoing process of negotiation, shaped by beliefs and assumptions rather than empirical evidence.


Data & Society Researcher Alexandra Mateescu dives into the implications of electronic monitoring in the criminal justice system.

“Enforcement of public safety and order is sometimes less about eradicating crime than it is about shuffling those deemed a risk out of certain spaces and into others.”


primer | 02.24.15

Police Body-Worn Cameras – Updated

Alexandra Mateescu, Alex Rosenblat, danah boyd (with support from Jenna Leventoff and David Robinson)

In the wake of the police shooting of Michael Brown in August 2014, as well as the subsequent protests in Ferguson, Missouri and around the country, there has been a call to mandate the use of body-worn cameras to promote accountability and transparency in police-civilian interactions. Both law enforcement and civil rights advocates are excited by the potential of body-worn cameras to improve community policing and safety, but there is no empirical research to conclusively suggest that these will reduce the deaths of black male civilians in encounters with police. There are some documented milder benefits evident from small pilot studies, such as more polite interactions between police and civilians when both parties are aware they are being recorded, and decreased fraudulent complaints made against officers. Many uncertainties about best practices of body-worn camera adoption and use remain, including when the cameras should record, what should be stored and retained, who should have access to the footage, and what policies should determine the release of footage to the public. As pilot and permanent body-worn camera programs are implemented, it is important to ask questions about how they can be best used to achieve their touted goals. How will the implementation of these programs be assessed for their efficacy in achieving accountability goals? What are the best policies to have in place to support those goals?

The primer on police body-worn cameras was written in February 2015. We provided an update on what has happened in the past year with regard to the use of body-worn cameras across the US (the update can be read here) for the 2015 Data & Civil Rights Conference, A New Era of Policing and Justice.


working paper | 07.16.15

Certifying and removing disparate impact

Michael Feldman, Sorelle A. Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian

D&S fellow Sorelle Friedler and her research colleagues investigate the ways that algorithms make decisions in all aspects of our lives and whether we can determine if these algorithms are biased, involve illegal discrimination, or are unfair. In this paper, they introduce and address two problems, with the goals of quantifying and then removing disparate impact.

Abstract: What does it mean for an algorithm to be biased? In U.S. law, unintentional bias is encoded via disparate impact, which occurs when a selection process has widely different outcomes for different groups, even as it appears to be neutral. This legal determination hinges on a definition of a protected class (ethnicity, gender, religious practice) and an explicit description of the process.
When the process is implemented using computers, determining disparate impact (and hence bias) is harder. It might not be possible to disclose the process. In addition, even if the process is open, it might be hard to elucidate in a legal setting how the algorithm makes its decisions. Instead of requiring access to the algorithm, we propose making inferences based on the data the algorithm uses.
We make four contributions to this problem. First, we link the legal notion of disparate impact to a measure of classification accuracy that while known, has received relatively little attention. Second, we propose a test for disparate impact based on analyzing the information leakage of the protected class from the other data attributes. Third, we describe methods by which data might be made unbiased. Finally, we present empirical evidence supporting the effectiveness of our test for disparate impact and our approach for both masking bias and preserving relevant information in the data. Interestingly, our approach resembles some actual selection practices that have recently received legal scrutiny.
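
For context, the legal notion referenced here is commonly operationalized as a ratio of selection rates under the EEOC’s “four-fifths” guideline. The Python sketch below computes that ratio for toy data; the helper function and numbers are hypothetical, and the paper’s own test goes further by measuring how well the protected class can be predicted from the remaining attributes.

```python
import numpy as np

def disparate_impact_ratio(selected, protected):
    """Ratio of selection rates for the protected group vs. everyone else.
    Values below roughly 0.8 would fail the EEOC four-fifths guideline."""
    selected = np.asarray(selected, dtype=bool)
    protected = np.asarray(protected, dtype=bool)
    return selected[protected].mean() / selected[~protected].mean()

# Toy example: a screening algorithm's yes/no outcomes for ten people.
selected  = [1, 0, 0, 1, 1, 1, 0, 1, 1, 0]
protected = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
print(f"Disparate impact ratio: {disparate_impact_ratio(selected, protected):.2f}")
# -> 0.75, which would be flagged under the four-fifths guideline.
```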


On May 19, 2015 a group of about 20 individuals gathered at New America in Washington, DC for a discussion co-hosted by The Leadership Conference on Civil and Human Rights, Data & Society Research Institute, Upturn, and New America’s Open Technology Institute. The group was composed of technologists, researchers, civil rights advocates, and law enforcement representatives, with the goal of broadening the discussion surrounding police-worn body cameras within their respective fields and understanding the various communities’ interests and concerns. The series of discussions focused on what the technology behind police cameras consists of, how the cameras can be implemented to protect civil rights and public safety, and what the consequences of implementation might be.

(CC BY-ND 2.0-licensed photo by Diana Robinson.)


magazine article | 05.27.15

What Amazon Taught the Cops

Ingrid Burrington

D&S artist in residence Ingrid Burrington writes about the history of the term “predictive policing,” the pressures on police forces that are driving them to embrace data-driven policing, and the many valid causes for concern and outrage among civil-liberties advocates around these techniques and tactics.

It’s telling that one of the first articles to promote predictive policing, a 2009 Police Chief Magazine piece by the LAPD’s Charlie Beck and consultant Colleen McCue, poses the question “What Can We Learn From Wal-Mart and Amazon About Fighting Crime in a Recession?” The article likens law enforcement to a logistics dilemma, in which prioritizing where police officers patrol is analogous to identifying the likely demand for Pop-Tarts. Predictive policing has emerged as an answer to police departments’ assertion that they’re being asked to do more with less. If we can’t hire more cops, the logic goes, we need these tools to deploy them more efficiently.

 


other | 05.19.15

The Minutes of Marshall Jones

Gideon Lichfield

D&S fellow Gideon Lichfield’s short story for the Police Technology and Civil Rights Roundtable:

“The year is 2019, and body cams have become standard for patrol officers in most police departments in the US. The cams and the management software for their footage are provided by a patchwork of vendors, and each department uses its own variant of them, with its own rules and procedures.”


The Atlantic | 05.15.15

It’s Not Too Late to Get Body Cameras Right

danah boyd, Alex Rosenblat

Excerpt: “Police-worn body cameras are coming. Support for them comes from stakeholders who often take opposing views. Law enforcement wants them, many politicians are pushing for them, and communities that already have a strong police presence in their neighborhoods are demanding that the police get cameras now. Civil-rights groups are advocating for them. The White House is funding them. The public is in favor of them. The collective — albeit, not universal — sentiment is that body cameras are a necessary and important solution to the rising concerns about fatal encounters between police and black men.

“As researchers who have spent the last few months analyzing what is known about body cams, we understand the reasons for this consensus, but we’re nervous that there will be unexpected and undesirable outcomes. On one hand, we’re worried that these expensive technologies will do little to curb systemic abuse. But what really scares us is the possibility that they may magnify injustice rather than help eradicate it. We support safeguards being put in place. But the cameras are not a proven technology, and we’re worried that too much is hinging on them being a silver bullet to a very serious problem. Our concerns stem from three major issues:

  1. Technology doesn’t produce accountability.
  2. Removing discretion often backfires.
  3. Surveillance carries significant, hidden economic and social costs.”

magazine article | 08.15.05

The Problem With Police Body Cameras

Janet A. Vertesi

“But as history tells us, camera evidence does not an indictment make.”

D&S advisor Janet Vertesi discusses the difficulty of visual evidence in criminal indictments and the power of visual suggestibility, offering evidence as to why police-worn body cameras may not be the panacea they have recently been portrayed as.


primer | 10.30.14

Data & Civil Rights: Criminal Justice Primer

Alex Rosenblat, Kate Wikelius, danah boyd, Seeta Peña Gangadharan, Corrine Yu

Discrimination and racial disparities persist at every stage of the U.S. criminal justice system, from policing to trials to sentencing. The United States incarcerates a higher percentage of its population than any of its peer countries, with 2.2 million people behind bars. The criminal justice system disproportionately harms communities of color: while they make up 30 percent of the U.S. population, they represent 60 percent of the incarcerated population. There has been some discussion of how “big data” can be used to remedy inequalities in the criminal justice system; civil rights advocates recognize potential benefits but remain fundamentally concerned that data-oriented approaches are being designed and applied in ways that also disproportionately harm those who are already marginalized by criminal justice processes.

This document is a workshop primer from Data & Civil Rights: Why “Big Data” is a Civil Rights Issue.


Countless highly accurate predictions can be made from trace data, with varying degrees of personal or societal consequence (e.g., search engines can predict hospital admissions, gaming companies can predict compulsive gambling problems, government agencies can predict criminal activity). Predicting human behavior can be both hugely beneficial and deeply problematic depending on the context. What kinds of predictive privacy harms are emerging? And what are the implications for systems of oversight and due process protections? For example, what are the implications for employment, health care, and policing when predictive models are involved? How should varied organizations address what they can predict?

This document is a workshop primer from The Social, Cultural & Ethical Dimensions of “Big Data”.

