Featured | filtered by: manipulation


D&S Media Manipulation Research Lead Joan Donovan talks about the role of large tech companies in curbing extremist activity online.

Joan Donovan, a media manipulation research lead at the research institute Data & Society, said it’s well within these companies’ reach to implement changes that will curb white supremacist activity. And it’s something she said major platforms like Facebook and Twitter will have to confront as they acknowledge their role in magnifying hate speech and those who spout it.

‘Richard Spencer might have a megaphone and his own website to communicate his messages of hate,’ Donovan said in a phone interview Wednesday. ‘Now these platforms are realizing they are the megaphone. They are the conduit between him and larger audiences.’

Movements like the so-called ‘alt-right’ aren’t just built on charisma, Donovan added — they’re built on infrastructure. The internet and all of its possibilities have now become a major part of that infrastructure.


Quartz cites D&S Postdoctoral Scholar Caroline Jack in its guide to the Lexicon of Lies report:

Problematic information comes in various forms, each uniquely irksome. Yet people are quick to blast all inaccuracies as “fake news,” reinforcing the sense that facts are a thing of the past.

That’s dangerous and it needn’t be the case, according to the Lexicon of Lies, a recent report from the New York-based Data and Society research institute. “The words we choose to describe media manipulation can lead to assumptions about how information spreads, who spreads it, and who receives it,” writes Caroline Jack, a media historian and postdoctoral fellow at Data and Society. On a cultural level, “these assumptions can shape what kinds of interventions or solutions seem desirable, appropriate, or even possible,” she writes.


Teaching Tolerance | 08.17.14

What is the Alt-Right?

Becca Lewis

D&S Researcher Becca Lewis discusses the recruiting methods of the Alt-Right in Teaching Tolerance.

‘Social media can be very powerful in shaping outlooks, but it doesn’t operate in a vacuum,’ explains Data & Society researcher Becca Lewis. ‘The shaping is coming from the other people using the platforms.’

The alt-right has a massive presence on social media and other channels where young people congregate. A Washington Post analysis identified 27,000 influential Twitter accounts associated with the alt-right, 13 percent of which are considered radical. Later, a George Washington University study found that white nationalist accounts in the United States have seen their follower counts grow by 600 percent since 2012.



report | 05.15.17

Media Manipulation and Disinformation Online

Alice Marwick and Rebecca Lewis

New Report Reveals Why Media Was Vulnerable to Radicalized Groups Online


Forvm | 05.05.17

The Koch Brothers at Villanova

Jack Flynn & Kinjal Dave

Jack Flynn and Data & Society Research Analyst Kinjal Dave examine the role of the Koch brothers at Villanova.

“In order to preserve academic freedom and integrity on campus, the university must introduce substantive measures improving transparency and visibility surrounding the influence of outside donors on campus.”


Julia Angwin, Jeff Larson, Lauren Kirchner, and Surya Mattu complete the Black Box series with an analysis of premiums and payouts in California, Illinois, Texas and Missouri that shows that some major insurers charge minority neighborhoods as much as 30 percent more than other areas with similar accident costs.

But a first-of-its-kind analysis by ProPublica and Consumer Reports, which examined auto insurance premiums and payouts in California, Illinois, Texas and Missouri, has found that many of the disparities in auto insurance prices between minority and white neighborhoods are wider than differences in risk can explain. In some cases, insurers such as Allstate, Geico and Liberty Mutual were charging premiums that were on average 30 percent higher in zip codes where most residents are minorities than in whiter neighborhoods with similar accident costs.
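To make the kind of comparison described above concrete, here is a minimal sketch of grouping zip codes by demographics and checking whether premiums outpace accident costs. The column names and the sample numbers are hypothetical illustrations, not ProPublica's data or methodology.

```python
import pandas as pd

# Toy sketch: for zip codes with similar accident costs (average payouts),
# compare premiums in majority-minority zip codes against whiter zip codes.
# All values below are made up for illustration.
df = pd.DataFrame({
    "zipcode":        ["60601", "60602", "60603", "60604"],
    "minority_share": [0.80, 0.15, 0.75, 0.20],      # share of residents who are minorities
    "avg_payout":     [400.0, 410.0, 395.0, 405.0],   # average annual loss per car (risk proxy)
    "avg_premium":    [1300.0, 1000.0, 1280.0, 990.0],
})

df["majority_minority"] = df["minority_share"] > 0.5
df["premium_to_payout"] = df["avg_premium"] / df["avg_payout"]

# With accident costs held roughly constant, a higher premium-to-payout ratio
# in majority-minority zip codes is the kind of disparity the analysis flags.
print(df.groupby("majority_minority")["premium_to_payout"].mean())
```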


D&S researcher danah boyd discusses the problem with asking companies like Facebook and Google to ‘solve’ fake news, arguing that such solutionism ignores the context of the complex social problems at stake.

Although a lot of the emphasis in the “fake news” discussion focuses on content that is widely spread and downright insane, much of the most insidious content out there isn’t in your face. It’s not spread widely, and certainly not by people who are forwarding it to object. It’s subtle content that is factually accurate, biased in presentation and framing, and encouraging folks to make dangerous conclusions that are not explicitly spelled out in the content itself.


Deutsche Welle | 01.25.17

Fake news is a red herring

Ethan Zuckerman

D&S advisor Ethan Zuckerman writes about fake news and the bigger problem behind it.

The truly disturbing truth is that fake news isn’t the cause of our contemporary political dysfunction. More troublingly, we live in a world where people disagree deeply and fundamentally about how to understand it, even when we share the same set of facts. Solving the problems of fake news makes that world slightly easier to navigate, but it doesn’t scratch the surface of the deeper problems of finding common ground with people with whom we disagree.


In “Did Media Literacy Backfire?” D&S founder danah boyd argues that the thorny problems of fake news and the spread of conspiracy theories have, in part, origins in efforts to educate people against misinformation. At the heart of the problem are deeper cultural divides that we must learn how to confront.


D&S advisor Baratunde Thurston details his exploration of The Glass Room exhibit.

I want to see The Glass Room everywhere there is an Apple Store…And anyone founding or working for a tech company should have to prove they’ve gone through this space and understood its meaning.


D&S founder danah boyd’s prepared remarks for a public roundtable in the European Parliament on algorithmic accountability and transparency in the digital economy were adapted in this Points piece.

I believe that algorithmic transparency creates false hope. Not only is it technically untenable, but it obfuscates the real politics that are at stake.


D&S researcher Alex Rosenblat wrote this piece drawing on her many interviews with Uber drivers around the country. Rosenblat highlights many aspects of drivers’ work and lives, including working in different regional contexts, anxieties around information privacy, and learning English on the job.

Just because software is universally deployable, though, doesn’t mean that work is experienced the same way everywhere, for everyone. The app works pretty much the same way in different places, and produces a workforce that behaves relatively homogeneously to give passengers a reliable experience — so it’s easy to come away with the impression that the work experience is standardized, too.


Points/spheres: D&S researcher Robyn Caplan asks: What drives Google’s policy of “not show[ing] a predicted query that is offensive or disparaging when displayed in conjunction with a person’s name?”

What the journalists from SourceFed may have stumbled upon was not an instance in which search results were intentionally being manipulated in favor of a candidate, but how algorithms can reflect complex jurisdictional issues and international policies that can, in turn, govern content.


D&S Fellow Natasha Singer looks into exploitative interactive website design techniques known as “dark patterns”.

Persuasive design is a longstanding practice, not just in marketing but in health care and philanthropy. Countries that nudge their citizens to become organ donors — by requiring them to opt out if they don’t want to donate their body parts — have a higher rate of participation than the United States, where people can choose to sign up for organ donation when they obtain driver’s licenses or ID cards.

But the same techniques that encourage citizens to do good may also be used to exploit consumers’ cognitive biases. User-experience designers and marketers are well aware that many people are so eager to start using a new service or complete a task, or are so loath to lose a perceived deal, that they will often click one “Next” button after another as if on autopilot — without necessarily understanding the terms they have agreed to along the way.

“That’s when things start to drift into manipulation,” said Katie Swindler, director of user experience at FCB Chicago, an ad agency. She and Mr. Brignull are part of an informal effort among industry experts trying to make a business case for increased transparency.


In this background primer, D&S Research Analyst Laura Reed and D&S Founder danah boyd situate the current debate around the role of technology in the public sphere within a historical context. They identify and tease out some of the underlying values, biases, and assumptions present in the current debate surrounding the relationship between media and democracy, and connect them to existing scholarship within media history that is working to understand the organizational, institutional, social, political, and economic factors affecting the flow of news and information. They also identify a set of key questions to keep in mind as the conversation around technology and the public sphere evolves.

Algorithms play an increasingly significant role in shaping the digital news and information landscape, and there is growing concern about the potential negative impact that algorithms might have on public discourse. Examples of algorithmic biases and increasingly curated news feeds call into question the degree to which individuals have equal access to the means of producing, disseminating, and accessing information online. At the same time, these debates about the relationship between media, democracy, and publics are not new, and linking those debates to these emerging conversations about algorithms can help clarify the underlying assumptions and expectations. What do we want algorithms to do in an era of personalization? What does a successful algorithm look like? What form does an ideal public sphere take in the digital age? In asking these and other questions, we seek to highlight what’s at stake in the conversation about algorithms and publics moving forward.


D&S Research Analyst Laura Reed and D&S Researcher Robyn Caplan put together a set of case studies to complement the contemporary issues primer, Mediation, Automation, and Power, for the Algorithms and Publics project. These case studies explore situations in which algorithmic media is shaping the public sphere across a variety of dimensions, including the changing role of the journalism industry, the use of algorithms for censorship or international compliance, how algorithms are functioning within foreign policy aims, digital gerrymandering, the spread of misinformation, and more.



D&S Researcher Tim Hwang and Samuel Woolley consider the larger trend toward automated politics, its likely future sophistication, and its potential impacts on the public sphere in the era of social media.

Political bots are challenging in part because they are dual-use. Even though many of the bot deployments we see are designed to manipulate social media and suppress discourse, bots aren’t inherently corrosive to the public sphere. There are numerous examples of bots deployed by media organizations, artists, and cultural commentators oriented toward raising awareness and autonomously “radiating” relevant news to the public. For instance, @stopandfrisk tweets information on every instance of stop-and-frisk in New York City in order to highlight the embattled policy. Likewise, @staywokebot sends messages related to the Black Lives Matter movement.

This is true of bots in general, even when they aren’t involved in politics. Intelligent systems can be used for all sorts of beneficial things—they can conserve energy and can even save lives—but they can also be used to waste resources and stifle free speech. Ultimately, the real challenge doesn’t lie in some inherent quality of the technology but in the incentives that encourage certain beneficial or harmful uses.

The upshot of this is that we should not simply block or allow all bots—the act of automation alone poses no threat to open discourse online. Instead, the challenge is to design a regime that encourages positive uses while effectively hindering negative uses.
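For a concrete sense of what an awareness-raising “radiator” bot looks like structurally, here is a minimal sketch in the spirit of @stopandfrisk. The data feed, field names, and post() stub are hypothetical; a real deployment would poll an actual open-data source and post through a platform API with credentials and rate limiting.

```python
import time
from dataclasses import dataclass

# Hypothetical sketch of a "radiator" bot: watch a public data feed and turn
# each new record into a short message for the public.

@dataclass
class Incident:
    precinct: str
    date: str
    outcome: str

def fetch_new_incidents() -> list[Incident]:
    """Stand-in for polling an open-data endpoint (made-up example data)."""
    return [Incident(precinct="75th", date="2016-03-01", outcome="no arrest")]

def format_message(incident: Incident) -> str:
    return (f"Stop-and-frisk reported in the {incident.precinct} precinct on "
            f"{incident.date}; outcome: {incident.outcome}.")

def post(message: str) -> None:
    """Placeholder for a platform API call; here it just prints the message."""
    print(message)

if __name__ == "__main__":
    for incident in fetch_new_incidents():
        post(format_message(incident))
        time.sleep(1)  # simple pacing between posts
```

The same structure, pointed at different inputs and wired to amplification tactics instead of public data, is what makes bots dual-use: the automation is identical, only the incentives and intent differ.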


University of Pennsylvania Law Review | 03.02.16

Accountable Algorithms

Joshua A. Kroll, Joanna Huey, Solon Barocas, Edward W. Felten, Joel R. Reidenberg, David G. Robinson, and Harlan Yu

D&S Affiliate Solon Barocas and Advisors Edward W. Felten and Joel Reidenberg collaborate on a paper outlining the importance of algorithmic accountability and fairness, proposing several tools that can be used when designing decision-making processes.

Abstract: Many important decisions historically made by people are now made by computers. Algorithms count votes, approve loan and credit card applications, target citizens or neighborhoods for police scrutiny, select taxpayers for an IRS audit, and grant or deny immigration visas.

The accountability mechanisms and legal standards that govern such decision processes have not kept pace with technology. The tools currently available to policymakers, legislators, and courts were developed to oversee human decision-makers and often fail when applied to computers instead: for example, how do you judge the intent of a piece of software? Additional approaches are needed to make automated decision systems — with their potentially incorrect, unjustified or unfair results — accountable and governable. This Article reveals a new technological toolkit to verify that automated decisions comply with key standards of legal fairness.

We challenge the dominant position in the legal literature that transparency will solve these problems. Disclosure of source code is often neither necessary (because of alternative techniques from computer science) nor sufficient (because of the complexity of code) to demonstrate the fairness of a process. Furthermore, transparency may be undesirable, such as when it permits tax cheats or terrorists to game the systems determining audits or security screening.

The central issue is how to assure the interests of citizens, and society as a whole, in making these processes more accountable. This Article argues that technology is creating new opportunities — more subtle and flexible than total transparency — to design decision-making algorithms so that they better align with legal and policy objectives. Doing so will improve not only the current governance of algorithms, but also — in certain cases — the governance of decision-making in general. The implicit (or explicit) biases of human decision-makers can be difficult to find and root out, but we can peer into the “brain” of an algorithm: computational processes and purpose specifications can be declared prior to use and verified afterwards.

The technological tools introduced in this Article apply widely. They can be used in designing decision-making processes from both the private and public sectors, and they can be tailored to verify different characteristics as desired by decision-makers, regulators, or the public. By forcing a more careful consideration of the effects of decision rules, they also engender policy discussions and closer looks at legal standards. As such, these tools have far-reaching implications throughout law and society.

Part I of this Article provides an accessible and concise introduction to foundational computer science concepts that can be used to verify and demonstrate compliance with key standards of legal fairness for automated decisions without revealing key attributes of the decision or the process by which the decision was reached. Part II then describes how these techniques can assure that decisions are made with the key governance attribute of procedural regularity, meaning that decisions are made under an announced set of rules consistently applied in each case. We demonstrate how this approach could be used to redesign and resolve issues with the State Department’s diversity visa lottery. In Part III, we go further and explore how other computational techniques can assure that automated decisions preserve fidelity to substantive legal and policy choices. We show how these tools may be used to assure that certain kinds of unjust discrimination are avoided and that automated decision processes behave in ways that comport with the social or legal standards that govern the decision. We also show how algorithmic decision-making may even complicate existing doctrines of disparate treatment and disparate impact, and we discuss some recent computer science work on detecting and removing discrimination in algorithms, especially in the context of big data and machine learning. And lastly in Part IV, we propose an agenda to further synergistic collaboration between computer science, law and policy to advance the design of automated decision processes for accountability.
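The abstract’s core idea of procedural regularity — announcing a decision rule before use and verifying afterwards that every decision followed it — can be illustrated with a minimal sketch. The hash-based commitment, the toy eligibility rule, and the audit log below are hypothetical illustrations, not the cryptographic tools the Article actually develops.

```python
import hashlib
import inspect

def decide(applicant: dict) -> bool:
    """The announced decision rule (a toy visa-lottery-style eligibility check)."""
    return applicant["education_years"] >= 12 and applicant["entries"] == 1

def commit(rule) -> str:
    """Publish a hash of the rule's source code before any decisions are made."""
    return hashlib.sha256(inspect.getsource(rule).encode("utf-8")).hexdigest()

def audit(rule, commitment: str, log: list[dict]) -> bool:
    """Check that the disclosed rule matches the commitment and explains every outcome."""
    if commit(rule) != commitment:
        return False  # the rule was swapped out after the fact
    return all(rule(entry["applicant"]) == entry["outcome"] for entry in log)

if __name__ == "__main__":
    commitment = commit(decide)  # published before decisions are made
    log = [
        {"applicant": {"education_years": 14, "entries": 1}, "outcome": True},
        {"applicant": {"education_years": 10, "entries": 1}, "outcome": False},
    ]
    print("Commitment:", commitment)
    print("Audit passes:", audit(decide, commitment, log))
```

This sketch only covers the simple case where the rule can be disclosed to an auditor; the Article’s techniques aim to demonstrate compliance without revealing key attributes of the decision or the process by which it was reached.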

