Featured (filtered by: law)


How will the introduction of AI into the field of medicine affect the doctor-patient relationship? Data & Society Fellow Claudia Haupt identifies some legal questions we should be asking.

“I contend that AI will not entirely replace human doctors (for now) due to unresolved issues in transposing diagnostics to a non-human context, including both limits on the technical capability of existing AI and open questions regarding legal frameworks such as professional duty and informed consent.”


D&S lawyer-in-residence Rebecca Wexler describes the intersection of automated technologies, trade secrets, and the criminal justice system.

For-profit companies dominate the criminal justice technologies industry and produce computer programs that are widespread throughout the justice system. These automated programs deploy cops, analyze forensic evidence, and assess the risk levels of inmates. But these technological advances may be making the system less fair, and without access to the source code, it’s impossible to hold computers to account.


Washington Monthly | 06.13.17

Code of Silence

Rebecca Wexler

D&S lawyer-in-residence Rebecca Wexler unpacks how private companies hide flaws in software that the government uses to convict and exonerate people in the criminal justice system.

What’s alarming about protecting trade secrets in criminal cases is that it allows private companies to withhold information not from competitors, but from individual defendants like Glenn Rodríguez. Generally, a defendant who wants to see evidence in someone else’s possession has to show that it is likely to be relevant to his case. When the evidence is considered “privileged,” the bar rises: he often has to convince the judge that the evidence could be necessary to his case—something that’s hard to do when, by definition, it’s evidence the defense hasn’t yet seen.


Columbia Law Review | 03.07.17

The Taking Economy: Uber, Information, and Power

Ryan Calo, Alex Rosenblat

Ryan Calo and D&S researcher Alex Rosenblat analyze what they term the ‘taking economy,’ with Uber as its central example.

Sharing economy firms such as Uber and Airbnb facilitate trusted transactions between strangers on digital platforms. This creates economic and other value and raises a set of concerns around racial bias, safety, and fairness to competitors and workers that legal scholarship has begun to address. Missing from the literature, however, is a fundamental critique of the sharing economy grounded in asymmetries of information and power. This Article, coauthored by a law professor and a technology ethnographer who studies the ride-hailing community, furnishes such a critique and indicates a path toward a meaningful response.

Commercial firms have long used what they know about consumers to shape their behavior and maximize profits. By virtue of sitting between consumers and providers of services, however, sharing economy firms have a unique capacity to monitor and nudge all participants—including people whose livelihood may depend on the platform. Much activity is hidden away from view, but preliminary evidence suggests that sharing economy firms may already be leveraging their access to information about users and their control over the user experience to mislead, coerce, or otherwise disadvantage sharing economy participants.

This Article argues that consumer protection law, with its longtime emphasis on asymmetries of information and power, is relatively well positioned to address this under-examined aspect of the sharing economy. But the regulatory response to date seems outdated and superficial. To be effective, legal interventions must (1) reflect a deeper understanding of the acts and practices of digital platforms and (2) interrupt the incentives of sharing economy firms to abuse their position.


points | 10.26.16

Models in Practice

Angèle Christin

D&S affiliate Angèle Christin writes a response piece to Cathy O’Neil’s Weapons of Math Destruction.

One of the most striking findings of my research so far is that there is often a major gap between what the top administrations of criminal courts say about risk scores and the ways in which judges, prosecutors, and court officers actually use them. When asked about risk scores, higher-ups often praise them unequivocally. For them, algorithmic techniques bear the promise of more objective sentencing decisions. They count on the instruments to help them empty their jails, reduce racial discrimination, and reduce expenses. They can’t get enough of them: most courts now rely on as many as four, five, or six different risk-assessment tools.

Yet it is unclear whether these risk scores always have the meaningful effect on criminal proceedings that their designers intended. During my observations, I realized that risk scores were often ignored. The scores were printed out and added to the heavy paper files about defendants, but prosecutors, attorneys, and judges never discussed them. The scores were not part of the plea bargaining and negotiation process. In fact, most of the judges and prosecutors told me that they did not trust the risk scores at all. Why should they follow the recommendations of a model built by a for-profit company that they knew nothing about, using data they didn’t control? They didn’t see the point. For better or worse, they trusted their own expertise and experience instead.


ProPublica | 05.23.16

Machine Bias: Risk Assessments in Criminal Sentencing

Julia Angwin, Jeff Larson, Surya Mattu, Lauren Kirchner, ProPublica

D&S fellow Surya Mattu investigated bias in risk assessments, algorithmically generated scores predicting the likelihood of a person committing a future crime. These scores are increasingly used in courtrooms across America to inform decisions about who can be set free at every stage of the criminal justice system, from assigning bond amounts to fundamental decisions about a defendant’s freedom:

We obtained the risk scores assigned to more than 7,000 people arrested in Broward County, Florida, in 2013 and 2014 and checked to see how many were charged with new crimes over the next two years, the same benchmark used by the creators of the algorithm.

The score proved remarkably unreliable in forecasting violent crime: Only 20 percent of the people predicted to commit violent crimes actually went on to do so.

When a full range of crimes was taken into account — including misdemeanors such as driving with an expired license — the algorithm was somewhat more accurate than a coin flip. Of those deemed likely to re-offend, 61 percent were arrested for any subsequent crimes within two years.

We also turned up significant racial disparities, just as [former Attorney General Eric] Holder feared. In forecasting who would re-offend, the algorithm made mistakes with black and white defendants at roughly the same rate but in very different ways.

  • The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants.
  • White defendants were mislabeled as low risk more often than black defendants.

Could this disparity be explained by defendants’ prior crimes or the type of crimes they were arrested for? No. We ran a statistical test that isolated the effect of race from criminal history and recidivism, as well as from defendants’ age and gender. Black defendants were still 77 percent more likely to be pegged as at higher risk of committing a future violent crime and 45 percent more likely to be predicted to commit a future crime of any kind.
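The “statistical test” described above is, in essence, a regression that predicts the risk label from race while holding the other factors constant. Below is a purely illustrative sketch of that kind of analysis in Python; the file name, column names, and model specification are assumptions for illustration, not ProPublica’s actual data or code (their full methodology is published separately).

    # Illustrative sketch only: isolating the effect of race on a binary
    # "high risk" label while controlling for criminal history, recidivism,
    # age, and gender. File and column names here are hypothetical.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per defendant: the label assigned by the tool (1 = high risk,
    # 0 = low risk) plus the covariates to control for.
    df = pd.read_csv("risk_scores.csv")

    # Logistic regression of the label on race plus control variables.
    model = smf.logit(
        "high_risk ~ C(race, Treatment(reference='Caucasian'))"
        " + priors_count + recidivated + age + C(sex)",
        data=df,
    ).fit()

    # Exponentiated coefficients are odds ratios: an odds ratio of about 1.45
    # on the race term would correspond to "45 percent more likely" to be
    # labeled high risk, with the other covariates held constant.
    print(np.exp(model.params).round(2))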


D&S Researcher Alex Rosenblat on the fallout of the Austin Transportation’s showdown with Uber and Lyft:

Uber allied with Lyft in Austin to lobby against an ordinance passed by the city council which requires ridehail drivers to undergo fingerprint-based background checks. The two companies spent $8.1 million combined to encourage (i.e., bombard with robo-texts) Austin voters to oppose the ordinance in a referendum vote called Proposition 1. If local cities take a stand against Uber’s and Lyft’s demands about background checks, and prevail, that could produce a ripple effect in other cities with regulatory demands of their own. The local impact on Austin is a secondary concern to the global and national ambitions of imperial Uber and parochial Lyft. When they lost the vote on Prop. 1, they followed through on their threats to withdraw their services.


D&S Advisor Susan Crawford argues that the President is on shaky legal ground in the FBI vs. Apple showdown:

The problem for the president is that when it comes to the specific battle going on right now between Apple and the FBI, the law is clear: twenty years ago, Congress passed a statute, the Communications Assistance for Law Enforcement Act (CALEA), that does not allow the government to tell manufacturers how to design or configure a phone or software used by that phone — including security software used by that phone.

CALEA was the subject of intense negotiation — a deal, in other words. The government won an extensive, specific list of wiretapping assistance requirements in connection with digital communications. But in exchange, in Section 1002 of that act, the Feds gave up authority to “require any specific design of equipment, facilities, services, features or system configurations” from any phone manufacturer. The government can’t require companies that build phones to come to it for clearance in advance of launching a new device. Nor can the authorities ask a manufacturer to design something new — like a back door — once that device is out.


D&S Advisor Joel Reidenberg considers the scope of the court order compelling Apple to provide “reasonable technical assistance” to help the government hack into the locked iPhone of one of the San Bernardino attackers.

In short, for government to legitimately circumvent device encryption through a court order, legal authorization to access the contents of the device (typically through a judicial warrant) is necessary. Then, if the equipment manufacturer has control over the encryption, the decryption should be performed by the manufacturer with the results provided to the government.

If, instead, the equipment manufacturer only has control over information necessary to decrypt the device, the information should be provided to the government under strict court seal and supervision for a one-time limited use.

If neither circumstance applies, then unless Congress says otherwise, the equipment manufacturer should not be compelled to assist.

The bottom line is that the government should have an ability to compel companies to unlock encrypted devices for access to evidence of crimes, but should not be able to force companies to build electronic skeleton keys, new access tools and security vulnerabilities.


D&S fellow Natasha Singer explores the differences in how the United States and Europe treat data protection and privacy.

In the United States, a variety of laws apply to specific sectors, like health and credit. In the European Union, data protection is considered a fundamental right, which can have far-reaching consequences in all 28 member states.

All the talk about data privacy can get caught up in political wrangling. But the different approaches have practical consequences for people, too.


“In a self-driving car, the control of the vehicle is shared between the driver and the car’s software. How the software behaves is in turn controlled — designed — by the software engineers. It’s no longer true to say that the driver is in full control… Nor does it feel right to say that the software designers are entirely in control.
“Yet as control becomes distributed across multiple actors, our social and legal conceptions of responsibility are still generally about an individual. If there’s a crash, we intuitively — and our laws, in practice — want someone to take the blame.
“The result of this ambiguity is that humans may emerge as ‘liability sponges’ or ‘moral crumple zones.’”

At Data & Society’s Intelligence and Autonomy forum in March 2015, “moral crumple zone” emerged as a useful shared term for the way the “human in the loop” is saddled with liability in the failure of an automated system.

In this essay in Quartz, Madeleine Clare Elish and Tim Hwang explore the problem that “moral crumple zone” names, with reference to cruise control, self-driving cars, and autopilot.


magazine article | 05.27.15

What Amazon Taught the Cops

Ingrid Burrington

D&S artist-in-residence Ingrid Burrington writes about the history of the term “predictive policing,” the pressures on police forces that are driving them to embrace data-driven policing, and the many valid causes for concern and outrage among civil-liberties advocates around these techniques and tactics.

It’s telling that one of the first articles to promote predictive policing, a 2009 Police Chief Magazine piece by the LAPD’s Charlie Beck and consultant Colleen McCue, poses the question “What Can We Learn From Wal-Mart and Amazon About Fighting Crime in a Recession?” The article likens law enforcement to a logistics dilemma, in which prioritizing where police officers patrol is analogous to identifying the likely demand for Pop-Tarts. Predictive policing has emerged as an answer to police departments’ assertion that they’re being asked to do more with less. If we can’t hire more cops, the logic goes, we need these tools to deploy them more efficiently.



Excerpt: “What’s more, metaphors matter because they shape laws and policies about data collection and use. As technology advances, law evolves (slowly, and somewhat clumsily) to accommodate new technologies and social norms around them. The most typical way this happens is that judges and regulators think about whether a new, unregulated technology is sufficiently like an existing thing that we already have rules about—and this is where metaphors and comparisons come in.”


Data & Society affiliate Kate Crawford comments on a court case in which a law firm is using data from a plaintiff’s Fitbit in support of her personal injury claim and explores the implications of elective self-tracking technologies for “truth” in legal proceedings.


Unionization emerged as a way of protecting labor rights when society shifted from an agricultural ecosystem to one shaped by manufacturing and industrial labor. New networked work complicates the organizing mechanisms that are inherent to unionization. How then do we protect laborers from abuse, poor work conditions, and discrimination?

This document was produced as a part of the Future of Work Project at Data & Society Research Institute. This effort is supported by the Open Society Foundations’ U.S. Programs Future of Work inquiry, which is bringing together a cross-disciplinary and diverse group of thinkers to address some of the biggest questions about how work is transforming and what working will look like 20-30 years from now. The inquiry is exploring how the transformation of work, jobs and income will affect the most vulnerable communities, and what can be done to alter the course of events for the better.


D&S fellow Karen Levy writes about the nation’s trucking system and the need for reform. Based on her three years of research on truckers’ compliance with federal regulations, she argues that reform must address the root economic causes underlying a range of unsafe practices, and that electronic monitoring is an incomplete solution to a serious public safety problem.

Truckers don’t work without sleep for dangerously long stretches (as many acknowledge having done) because it’s fun. They do it because they have to earn a living. The market demands a pace of work that many drivers say is impossible to meet if they’re “driving legal.”

If we want safer highways and fewer accidents, we must also attend to the economic realities that drive truckers to push their limits.


working paper | 05.14.14

Networked Rights and Networked Harms

Karen Levy, danah boyd

(Conference draft). “Networked Rights and Networked Harms.” Presented at Privacy Law School Conference (June 6, 2014) and Data & Discrimination (May 14, 2014).

The goal of this paper (far from a finished product, filled with gaps in logic and argumentation) is to imagine and interrogate two interwoven concepts of “networked rights” and “networked harms,” to bridge conversations in law, policy, and social science.

To learn more or read the draft, contact Karen and danah.


Abstract: The rise of “Big Data” analytics in the private sector poses new challenges for privacy advocates. Through its reliance on existing data and predictive analysis to create detailed individual profiles, Big Data has exploded the scope of personally identifiable information (“PII”). It has also effectively marginalized regulatory schema by evading current privacy protections with its novel methodology. Furthermore, poor execution of Big Data methodology may create additional harms by rendering inaccurate profiles that nonetheless impact an individual’s life and livelihood. To respond to Big Data’s evolving practices, this Article examines several existing privacy regimes and explains why these approaches inadequately address current Big Data challenges. This Article then proposes a new approach to mitigating predictive privacy harms—that of a right to procedural data due process. Although current privacy regimes offer limited nominal due process-like mechanisms, a more rigorous framework is needed to address their shortcomings. By examining due process’s role in the Anglo-American legal system and building on previous scholarship about due process for public administrative computer systems, this Article argues that individuals affected by Big Data should have similar rights to those in the legal system with respect to how their personal data is used in such adjudications. Using these principles, this Article analogizes a system of regulation that would provide such rights against private Big Data actors.

