featured | filtered by: manipulation


For Points, Data & Society Postdoctoral Scholar Caroline Jack reviews the history of advertising imaginaries.

“The question of what protections ads themselves deserve, and to what degree people deserve to be protected from ads, is ripe for reconsideration.”


In this Medium post, Founder and President danah boyd reflects on the current state of journalism and offers next steps.

“Contemporary propaganda isn’t about convincing someone to believe something, but convincing them to doubt what they think they know.”


How do people decide what to trust? Data & Society Postdoctoral Scholar Francesca Tripodi shares insights from her research into conservative news practices.

“While not all Christians are conservative nor all conservatives religious, there is a clear connection between how the process of scriptural inference trickles down into conservative methods of inquiry. Favoring the original text of the Constitution is closely tied to the practices of ‘constitutional conservatism,’ and currently members in all three branches of the U.S. government rely on practices of scriptural inference to make important political decisions.”


The Guardian | 06.01.18

The Case for Quarantining Extremist Ideas

danah boyd, Joan Donovan

Data & Society President and Founder danah boyd and Media Manipulation Research Lead Joan Donovan challenge newsrooms to practice “strategic silence” to avoid amplifying extremist messaging.

“Editors used to engage in strategic silence – set agendas, omit extremist ideas and manage voices – without knowing they were doing so. Yet the online context has enhanced extremists’ abilities to create controversies, prompting newsrooms to justify covering their spectacles. Because competition for audience is increasingly fierce and financially consequential, longstanding newsroom norms have come undone. We believe that journalists do not rebuild reputation through a race to the bottom. Rather, we think that it’s imperative that newsrooms actively take the high ground and re-embrace strategic silence in order to defy extremists’ platforms for spreading hate.”


report | 05.22.18

The Oxygen of Amplification

Whitney Phillips

The Oxygen of Amplification draws on in-depth interviews by scholar Whitney Phillips to show how the news media were hijacked from 2016 to 2018 to amplify the messages of hate groups.

Offering extremely candid comments from mainstream journalists, the report provides a snapshot of an industry caught between the pressure to deliver page views, the impulse to cover manipulators and “trolls,” and the disgust, expressed in interviewees’ own words, at having accidentally propagated extremist ideology.


report | 05.16.18

Searching for Alternative Facts

Francesca Tripodi

Searching for Alternative Facts is an ethnographic account drawn directly from Dr. Francesca Tripodi’s research within upper-middle class conservative Christian communities in Virginia in 2017. Dr. Tripodi uses Christian practices of Biblical interpretation as a lens for understanding the relationship between so-called “alternative” or “fake news” sources and contemporary conservative political thought.


Search plays a unique role in modern online information systems.

Unlike social media, where users primarily consume algorithmically curated feeds of information, a search engine interaction typically begins with a query or question posed in an effort to seek out new information.

However, not all search queries are equal. For many search terms, the available relevant data is limited, non-existent, or deeply problematic.

We call these “data voids.”

Data Voids: Where Missing Data Can Easily Be Exploited explores different types of data voids; the challenges that search engines face when they encounter queries over spaces where data voids exist; and the ways data voids can be exploited by those with ideological, economic, or political agendas.

Authors

Michael Golebiewski, Microsoft Bing

danah boyd, Microsoft Research and Data & Society
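
The report itself is conceptual, but its central heuristic can be sketched in code. Below is a minimal illustration of ours (not from the report): a query is flagged as a potential data void when relevant results are scarce or dominated by low-credibility sources. The result type, thresholds, and credibility scores are all assumptions for demonstration.

```python
# A toy "data void" detector -- our illustration, not code from the
# report. Thresholds and the credibility score are hypothetical.
from dataclasses import dataclass

@dataclass
class SearchResult:
    url: str
    credibility: float  # assumed 0.0-1.0 score from some external rater

def is_potential_data_void(results, min_results=20,
                           min_credible=3, credibility_floor=0.7):
    """Flag a query when relevant content is scarce, or when what
    exists is dominated by low-credibility sources."""
    if len(results) < min_results:
        return True  # few pages exist for this term at all
    credible = [r for r in results if r.credibility >= credibility_floor]
    return len(credible) < min_credible

# Example: a freshly coined phrase with only two fringe pages indexed.
fringe = [SearchResult("https://example.org/a", 0.2),
          SearchResult("https://example.org/b", 0.3)]
print(is_potential_data_void(fringe))  # True -> a candidate data void
```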


book | 03.01.18

Trump and the Media

Edited by Pablo J. Boczkowski and Zizi Papacharissi

D&S Founder danah boyd and Researcher Robyn Caplan contributed to the book “Trump and the Media,” which examines the role the media played in the election of Donald Trump.

Other contributors include: Mike Ananny, Chris W. Anderson, Rodney Benson, Pablo J. Boczkowski, Michael X. Delli Carpini, Josh Cowls, Susan J. Douglas, Keith N. Hampton, Dave Karpf, Daniel Kreiss, Seth C. Lewis, Zoey Lichtenheld, Andrew L. Mendelson, Gina Neff, Zizi Papacharissi, Katy E. Pearce, Victor Pickard, Sue Robinson, Adrienne Russell, Ralph Schroeder, Michael Schudson, Julia Sonnevend, Keren Tenenboim-Weinblatt, Tina Tucker, Fred Turner, Nikki Usher, Karin Wahl-Jorgensen, Silvio Waisbord, Barbie Zelizer.


Read and/or watch Data & Society Founder and President danah boyd’s keynote talk at SXSW EDU 2018.

“I get that many progressive communities are panicked about conservative media, but we live in a polarized society and I worry about how people judge those they don’t understand or respect. It also seems to me that the narrow version of media literacy that I hear as the “solution” is supposed to magically solve our political divide. It won’t. More importantly, as I’m watching social media and news media get weaponized, I’m deeply concerned that the well-intended interventions I hear people propose will backfire, because I’m fairly certain that the crass versions of critical thinking already have.”


Education Week interviewed Data & Society Media Manipulation Lead Joan Donovan about misinformation spread after the Parkland shooting.

“The problem with amplified speech online is that something like this crisis-actor narrative gets a lot of reach and attention, then the story becomes about that, and not the shooting or what these students are doing. I would suggest that media only mentions these narratives to say that this is wrong and that students need to be believed.”


report | 02.21.18

The Promises, Challenges, and Futures of Media Literacy

Monica Bulger and Patrick Davison

This report responds to the “fake news” problem by evaluating the successes and failures of recent media literacy efforts while pointing towards next steps for educators, legislators, technologists, and philanthropists.


report | 02.21.18

Dead Reckoning

Robyn Caplan, Lauren Hanson, and Joan Donovan

New Data & Society report clarifies current uses of “fake news” and analyzes four specific strategies for intervention.


video | 01.24.18

Are Internet Trolls Born or Made?

Amanda Lenhart, Kathryn Zickuhr

What are internet trolls? Above the Noise explains where internet trolls come from in this video and encourages viewers to read Data & Society’s report “Online Harassment, Digital Abuse, and Cyberstalking in America.”


Miranda Katz of WIRED interviews D&S founder and president danah boyd on the evolving public discourse around disinformation and how the tech industry can help rebuild American society.

“It’s actually really clear: How do you reknit society? Society is produced by the social connections that are knit together. The stronger those networks, the stronger the society. We have to make a concerted effort to create social ties, social relationships, social networks in the classic sense that allow for strategic bridges across the polis so that people can see themselves as one.”


Is Facebook a platform or a media company? NBC News THINK asks D&S researcher Robyn Caplan to comment on the recent tech hearings.

“Facebook thinks of itself as a neutral platform where everyone can come and share ideas…They’re basically saying that they’re the neutral public sphere. That they are the marketplace of ideas, instead of being the marketers of ideas.”


This is a transcript of Data & Society founder and president danah boyd’s recent lightning talk at The People’s Disruption: Platform Co-Ops for Global Challenges.

“But as many of you know, power corrupts. And the same geek masculinities that were once rejuvenating have spiraled out of control. Today, we’re watching as diversity becomes a wedge issue that can be used to radicalize disaffected young men in tech. The gendered nature of tech is getting ugly.”


D&S Media Manipulation Research Lead Joan Donovan talks about the role of large tech companies in curbing extremist activity online.

Joan Donovan, a media manipulation research lead at the research institute Data & Society, said it’s well within these companies’ reach to implement changes that will curb white supremacist activity. And it’s something she said major platforms like Facebook and Twitter will have to confront as they acknowledge their role in magnifying hate speech and those who spout it.

‘Richard Spencer might have a megaphone and his own website to communicate his messages of hate,’ Donovan said in a phone interview Wednesday. ‘Now these platforms are realizing they are the megaphone. They are the conduit between him and larger audiences.’

Movements like the so-called ‘alt-right’ aren’t just built on charisma, Donovan added — they’re built on infrastructure. The internet and all of its possibilities has now become a major part of that infrastructure.


Quartz cites D&S Postdoctoral Scholar Caroline Jack in its guide to Lexicon of Lies:

Problematic information comes in various forms, each uniquely irksome. Yet people are quick to blast all inaccuracies as “fake news,” reinforcing the sense that facts are a thing of the past.

That’s dangerous and it needn’t be the case, according to the Lexicon of Lies, a recent report from the New York-based Data and Society research institute. “The words we choose to describe media manipulation can lead to assumptions about how information spreads, who spreads it, and who receives it,” writes Caroline Jack, a media historian and postdoctoral fellow at Data and Society. On a cultural level, “these assumptions can shape what kinds of interventions or solutions seem desirable, appropriate, or even possible,” she writes.


Teaching Tolerance | 08.17.14

What is the Alt-Right?

Becca Lewis

D&S Researcher Becca Lewis discusses the recruiting methodologies of the Alt-Right in Teaching Tolerance.

‘Social media can be very powerful in shaping outlooks, but it doesn’t operate in a vacuum,’ explains Data & Society researcher Becca Lewis. ‘The shaping is coming from the other people using the platforms.’

The alt-right has a massive presence on social media and other channels where young people congregate. A Washington Post analysis identified 27,000 influential Twitter accounts associated with the alt-right, 13 percent of which are considered radical. Later, a George Washington University study found that white nationalist accounts in the United States have seen their follower counts grow by 600 percent since 2012.



report | 05.15.17

Media Manipulation and Disinformation Online

Alice Marwick and Rebecca Lewis

New Report Reveals Why Media Was Vulnerable to Radicalized Groups Online


Julia Angwin, Jeff Larson, Lauren Kirchner, and Surya Mattu complete the Black Box series with an analysis of premiums and payouts in California, Illinois, Texas and Missouri that shows that some major insurers charge minority neighborhoods as much as 30 percent more than other areas with similar accident costs.

But a first-of-its-kind analysis by ProPublica and Consumer Reports, which examined auto insurance premiums and payouts in California, Illinois, Texas and Missouri, has found that many of the disparities in auto insurance prices between minority and white neighborhoods are wider than differences in risk can explain. In some cases, insurers such as Allstate, Geico and Liberty Mutual were charging premiums that were on average 30 percent higher in zip codes where most residents are minorities than in whiter neighborhoods with similar accident costs.


D&S founder danah boyd discusses the problem with asking companies like Facebook and Google to ‘solve’ fake news, arguing that this kind of solutionism strips away the context of the complex social problems that underlie it.

Although a lot of the emphasis in the “fake news” discussion focuses on content that is widely spread and downright insane, much of the most insidious content out there isn’t in your face. It’s not spread widely, and certainly not by people who are forwarding it to object. It’s subtle content that is factually accurate, biased in presentation and framing, and encouraging folks to make dangerous conclusions that are not explicitly spelled out in the content itself.


Deutsche Welle | 01.25.17

Fake news is a red herring

Ethan Zuckerman

D&S advisor Ethan Zuckerman writes about fake news and the bigger problem behind fake news.

The truly disturbing truth is that fake news isn’t the cause of our contemporary political dysfunction. More troublingly, we live in a world where people disagree deeply and fundamentally about how to understand it, even when we share the same set of facts. Solving the problems of fake news would make that world slightly easier to navigate, but it wouldn’t scratch the surface of the deeper problem of finding common ground with people with whom we disagree.


In “Did Media Literacy Backfire?” D&S founder danah boyd argues that the thorny problems of fake news and the spread of conspiracy theories have, in part, origins in efforts to educate people against misinformation. At the heart of the problem are deeper cultural divides that we must learn how to confront.


D&S advisor Baratunde Thurston details his exploration of The Glass Room exhibit.

I want to see The Glass Room everywhere there is an Apple Store…And anyone founding or working for a tech company should have to prove they’ve gone through this space and understood its meaning.


D&S founder danah boyd’s prepared remarks for a public roundtable in the European Parliament on algorithmic accountability and transparency in the digital economy were adapted in this Points piece.

I believe that algorithmic transparency creates false hope. Not only is it technically untenable, but it obfuscates the real politics that are at stake.


D&S researcher Alex Rosenblat wrote this piece narrating her many interviews with Uber drivers around the country. In this article, Rosenblat highlights many aspects of Uber drivers’ work and lives, including working in different regional contexts, anxieties around information privacy, and learning English on the job.

Just because software is universally deployable, though, doesn’t mean that work is experienced the same way everywhere, for everyone. The app works pretty much the same way in different places, and produces a workforce that behaves relatively homogeneously to give passengers a reliable experience — it’s easy to come away with the impression that the work experience is standardized, too.


Points/spheres: D&S researcher Robyn Caplan asks: What drives Google’s policy of “not show[ing] a predicted query that is offensive or disparaging when displayed in conjunction with a person’s name?”

What the journalists from SourceFed may have stumbled upon was not an instance in which search results were intentionally being manipulated in favor of a candidate, but how algorithms can reflect complex jurisdictional issues and international policies that can, in turn, govern content.


D&S Fellow Natasha Singer looks into exploitative interactive website design techniques known as “dark patterns”.

Persuasive design is a longstanding practice, not just in marketing but in health care and philanthropy. Countries that nudge their citizens to become organ donors — by requiring them to opt out if they don’t want to donate their body parts — have a higher rate of participation than the United States, where people can choose to sign up for organ donation when they obtain driver’s licenses or ID cards.

But the same techniques that encourage citizens to do good may also be used to exploit consumers’ cognitive biases. User-experience designers and marketers are well aware that many people are so eager to start using a new service or complete a task, or are so loath to lose a perceived deal, that they will often click one “Next” button after another as if on autopilot — without necessarily understanding the terms they have agreed to along the way.

“That’s when things start to drift into manipulation,” said Katie Swindler, director of user experience at FCB Chicago, an ad agency. She and Mr. Brignull are part of an informal effort among industry experts trying to make a business case for increased transparency.


In this background primer, D&S Research Analyst Laura Reed and D&S Founder danah boyd situate the current debate around the role of technology in the public sphere within a historical context. They identify and tease out some of the underlying values, biases, and assumptions present in the current debate surrounding the relationship between media and democracy, and connect them to existing scholarship within media history that is working to understand the organizational, institutional, social, political, and economic factors affecting the flow of news and information. They also identify a set of key questions to keep in mind as the conversation around technology and the public sphere evolves.

Algorithms play an increasingly significant role in shaping the digital news and information landscape, and there is growing concern about the potential negative impact that algorithms might have on public discourse. Examples of algorithmic biases and increasingly curated news feeds call into question the degree to which individuals have equal access to the means of producing, disseminating, and accessing information online. At the same time, these debates about the relationship between media, democracy, and publics are not new, and linking those debates to these emerging conversations about algorithms can help clarify the underlying assumptions and expectations. What do we want algorithms to do in an era of personalization? What does a successful algorithm look like? What form does an ideal public sphere take in the digital age? In asking these and other questions, we seek to highlight what’s at stake in the conversation about algorithms and publics moving forward.


D&S Research Analyst Laura Reed and D&S Researcher Robyn Caplan put together a set of case studies to complement the contemporary issues primer, Mediation, Automation, and Power, for the Algorithms and Publics project. These case studies explore situations in which algorithmic media is shaping the public sphere across a variety of dimensions, including the changing role of the journalism industry, the use of algorithms for censorship or international compliance, how algorithms are functioning within foreign policy aims, digital gerrymandering, the spread of misinformation, and more.



D&S Researcher Tim Hwang and Samuel Woolley consider the larger trend toward automated politics and the likely future sophistication of automated politics and potential impacts on the public sphere in the era of social media.

Political bots are challenging in part because they are dual-use. Even though many of the bot deployments we see are designed to manipulate social media and suppress discourse, bots aren’t inherently corrosive to the public sphere. There are numerous examples of bots deployed by media organizations, artists, and cultural commentators oriented toward raising awareness and autonomously “radiating” relevant news to the public. For instance, @stopandfrisk tweets information on every instance of stop-and-frisk in New York City in order to highlight the embattled policy. Similarly, @staywokebot sends messages related to the Black Lives Matter movement.

This is true of bots in general, even when they aren’t involved in politics. Intelligent systems can be used for all sorts of beneficial things—they can conserve energy and can even save lives—but they can also be used to waste resources and forfeit free speech. Ultimately, the real challenge doesn’t lie in some inherent quality of the technology but the incentives that encourage certain beneficial or harmful uses.

The upshot of this is that we should not simply block or allow all bots—the act of automation alone poses no threat to open discourse online. Instead, the challenge is to design a regime that encourages positive uses while effectively hindering negative uses.
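
As a concrete illustration of the “radiating” pattern described above, here is a minimal sketch of ours (not from the paper): a bot that turns records of a public dataset into a steady stream of factual posts. The record schema, the sample data, and the message format are hypothetical, and the platform call is stubbed out with print rather than a real API.

```python
# Sketch of a news-"radiating" awareness bot in the spirit of accounts
# like @stopandfrisk. Our illustration only: record fields and sample
# values are hypothetical; posting is stubbed with print().
import time

def format_post(record):
    """Turn one record into a short, factual message."""
    return (f"On {record['date']} in {record['neighborhood']}, "
            f"{record['stops']} stop-and-frisk stops were recorded.")

def radiate(records, post, delay_seconds=1.0):
    """Publish each record at a steady cadence."""
    for record in records:
        post(format_post(record))
        time.sleep(delay_seconds)

# Hypothetical sample data; a real bot would read a public dataset.
sample = [
    {"date": "2016-03-01", "neighborhood": "Brownsville", "stops": 12},
    {"date": "2016-03-02", "neighborhood": "East Harlem", "stops": 7},
]
radiate(sample, post=print, delay_seconds=0)
```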


University of Pennsylvania Law Review | 03.02.16

Accountable Algorithms

Joshua A. Kroll, Joanna Huey, Solon Barocas, Edward W. Felten, Joel R. Reidenberg, David G. Robinson, and Harlan Yu

D&S Affiliate Solon Barocas and Advisors Edward W. Felten and Joel Reidenberg collaborate on a paper outlining the importance of algorithmic accountability and fairness, proposing several tools that can be used when designing decision-making processes.

Abstract: Many important decisions historically made by people are now made by computers. Algorithms count votes, approve loan and credit card applications, target citizens or neighborhoods for police scrutiny, select taxpayers for an IRS audit, and grant or deny immigration visas.

The accountability mechanisms and legal standards that govern such decision processes have not kept pace with technology. The tools currently available to policymakers, legislators, and courts were developed to oversee human decision-makers and often fail when applied to computers instead: for example, how do you judge the intent of a piece of software? Additional approaches are needed to make automated decision systems — with their potentially incorrect, unjustified or unfair results — accountable and governable. This Article reveals a new technological toolkit to verify that automated decisions comply with key standards of legal fairness.

We challenge the dominant position in the legal literature that transparency will solve these problems. Disclosure of source code is often neither necessary (because of alternative techniques from computer science) nor sufficient (because of the complexity of code) to demonstrate the fairness of a process. Furthermore, transparency may be undesirable, such as when it permits tax cheats or terrorists to game the systems determining audits or security screening.

The central issue is how to assure the interests of citizens, and society as a whole, in making these processes more accountable. This Article argues that technology is creating new opportunities — more subtle and flexible than total transparency — to design decision-making algorithms so that they better align with legal and policy objectives. Doing so will improve not only the current governance of algorithms, but also — in certain cases — the governance of decision-making in general. The implicit (or explicit) biases of human decision-makers can be difficult to find and root out, but we can peer into the “brain” of an algorithm: computational processes and purpose specifications can be declared prior to use and verified afterwards.

The technological tools introduced in this Article apply widely. They can be used in designing decision-making processes from both the private and public sectors, and they can be tailored to verify different characteristics as desired by decision-makers, regulators, or the public. By forcing a more careful consideration of the effects of decision rules, they also engender policy discussions and closer looks at legal standards. As such, these tools have far-reaching implications throughout law and society.

Part I of this Article provides an accessible and concise introduction to foundational computer science concepts that can be used to verify and demonstrate compliance with key standards of legal fairness for automated decisions without revealing key attributes of the decision or the process by which the decision was reached. Part II then describes how these techniques can assure that decisions are made with the key governance attribute of procedural regularity, meaning that decisions are made under an announced set of rules consistently applied in each case. We demonstrate how this approach could be used to redesign and resolve issues with the State Department’s diversity visa lottery. In Part III, we go further and explore how other computational techniques can assure that automated decisions preserve fidelity to substantive legal and policy choices. We show how these tools may be used to assure that certain kinds of unjust discrimination are avoided and that automated decision processes behave in ways that comport with the social or legal standards that govern the decision. We also show how algorithmic decision-making may even complicate existing doctrines of disparate treatment and disparate impact, and we discuss some recent computer science work on detecting and removing discrimination in algorithms, especially in the context of big data and machine learning. And lastly in Part IV, we propose an agenda to further synergistic collaboration between computer science, law and policy to advance the design of automated decision processes for accountability.
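
One idea from Part II, procedural regularity, can be made concrete with a toy commit-and-verify flow. This is our sketch, not the Article’s toolkit: a bare hash commitment stands in for the richer cryptographic machinery (zero-knowledge proofs, verifiable lotteries) the authors describe, and the decision rule is hypothetical. A decision-maker publishes a commitment to its rule before any cases are decided, then reveals the rule afterward so anyone can confirm it never changed.

```python
# Toy sketch of procedural regularity via a hash commitment -- our
# illustration; the Article's toolkit is far richer.
import hashlib
import secrets

def commit(rule_source):
    """Publish SHA-256(nonce || rule) before the rule is ever applied."""
    nonce = secrets.token_hex(16)
    digest = hashlib.sha256((nonce + rule_source).encode()).hexdigest()
    return digest, nonce  # digest is public now; nonce is revealed later

def verify(commitment, nonce, rule_source):
    """Anyone can check the revealed rule matches the prior commitment."""
    return hashlib.sha256((nonce + rule_source).encode()).hexdigest() == commitment

rule = "approve if income >= 3 * rent and no defaults in past 24 months"
public_digest, nonce = commit(rule)
# ... decisions are made under `rule` ...
print(verify(public_digest, nonce, rule))        # True: the same rule was used
print(verify(public_digest, nonce, rule + "!"))  # False: an altered rule fails
```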

