Updates and ideas from the D&S community and beyond
Around the Institute
Discriminating Tastes: Customer Ratings as Vehicles for Bias
New work from our Intelligence & Autonomy initiative: Alex Rosenblat, Karen Levy, Solon Barocas, and Tim Hwang examine how bias can creep into evaluations of Uber drivers through consumer-sourced rating systems:
“Through the rating system, consumers can directly assert their preferences and their biases in ways that companies are prohibited from doing on their behalf. The fact that customers may be racist, for example, does not license a company to consciously or even implicitly consider race in its hiring decisions. The problem here is that Uber can cater to racists, for example, without ever having to consider race, and so never engage in behavior that amounts to disparate treatment. In effect, companies may be able to perpetuate bias without being liable for it.”
D&S Workshop: Work, Labor, and Automation
On January 23, we will host a workshop on the intersection of technology and work/labor. Participation is limited; those interested in attending should apply by November 1.
Best Practices for Conducting Risky Research and Protecting Yourself from Online Harassment
Alice Marwick, Lindsay Blackwell, and Katherine Lo have developed a set of best practices for researchers whose work may make them targets of online harassment. The team also created a two-page information sheet that researchers can give to university personnel to explain the realities of online harassment and what administrators can do about it.
Points: Who does the hard work of bridging context and technical skills?
“For technology projects to be successful and have impact, we need to move past the binary of ‘hard’ and ‘soft’ skills, and recognise the value of people who combine context with technical knowledge.” —Zara Rahman
Student Data & Mental Health: What the Blank Spots Say
New from Enabling Connected Learning: Mikaela Pitcan examines the gaps in data on students' mental health and what those gaps imply for education.
Around the Web
The Perpetual Line-Up
A thorough, year-long investigation by the Center on Privacy & Technology at Georgetown Law analyzes facial recognition systems in police departments. The study found that “facial recognition affects over 117 million American adults. A few agencies have instituted meaningful protections to prevent the misuse of the technology. In many more cases, it is out of control.”
A computer program used for bail and sentencing decisions was labeled biased against blacks. It’s actually not that clear.
“It’s easy to get lost in the often technical back-and-forth between ProPublica and Northpointe, but at the heart of their disagreement is a subtle ethical question: What does it mean for an algorithm to be fair? Surprisingly, there is a mathematical limit to how fair any algorithm — or human decision-maker — can ever be.” —Sam Corbett-Davies, Emma Pierson, Avi Feller, Sharad Goel