report | 11.21.16

Online Harassment, Digital Abuse, and Cyberstalking in America

Amanda Lenhart, Michele Ybarra (CiPHR), Kathryn Zickuhr, Myeshia Price-Feeney (CiPHR)

The internet and digital tools play an increasingly central role in how Americans engage with their communities: how they find and share information; how they connect with their friends, family, and professional networks; how they entertain themselves; how they seek answers to sensitive questions; how they learn about—and access—the world around them. The internet is built on the ideal of the free flow of information, but it is also built on the ideal of free-flowing discourse.

However, one persistent challenge to this ideal has been online harassment and abuse: unwanted contact, delivered through digital means, that creates an intimidating, annoying, frightening, or even hostile environment for the victim. As with their traditional expressions, online harassment and abuse can affect many aspects of our digital lives. Even those who do not experience online harassment directly can see it and respond to its effects; even the threat of harassment can suppress the voices of many of our citizens.

To explore these issues and the ways that online environments shape our experiences, this report examines American teens’ and adults’ experiences with witnessing, experiencing, and responding to the aftermath of online harassment and abuse.


Download: full report | methods | press release | Social Media Use by Americans, 2016 (Data Memo)

Additional Reports: Nonconsensual Image Sharing | Intimate Partner Digital Abuse


Acknowledgements

This report was made possible by a grant from the Digital Trust Foundation. The authors would like to thank the Foundation for their support of this project. In addition to the named authors, we want to acknowledge and thank the other individuals who contributed to this report: Hannah Madison, Emilie Chen, Chantel Gammage, Alexandra Mateescu, Angie Waller, Seth Young, and Shana Kimball. We would also like to thank our advisors and reviewers for their help in thinking through the questions to ask and their feedback on the report. Our advisors and reviewers include danah boyd, Monica Bulger, Maeve Duggan, Rachel Hartman, Amanda Levendowski, and the team at the Safety Net Technology Project at the National Network to End Domestic Violence.


teaching | 09.19.16

Supporting Ethics in Data Research

Emily Keller, Bonnie Tijerina, danah boyd
Background:

University campuses provide an ecosystem of support to technical researchers, including computer scientists, as they navigate emerging issues of privacy, ethics, security, and consent in big data research. These support systems have varying levels of coordination and may be implicit or explicit.

As part of the Supporting Ethics in Data Research project at Data & Society, we held workshops with twelve to sixteen student researchers, professors, information technology leaders, repository managers, and research librarians at a handful of universities. The goal was to tease out the individual components of ethical, technical, and legal support that were available or absent on each campus, and to better understand the interactions between different actors as they encounter common ethical quandaries.

Materials: sticky notes, scratch paper, pens, and markers
Downloads: Supporting_Ethics_Materials_Sept2016.zip
Exercises:

Case Study of a Technical Researcher: provides a fictional scenario involving a researcher who needs assistance navigating a number of obstacles during her technical research.

Data Clinic Model: facilitates a brainstorming session about the components needed for a drop-in clinic to offer peer and professional support.

Ethics Conversation: asks participants to link words, feelings, and thoughts to the word “ethics,” followed by a discussion.

Read More: For the detailed findings of this project, please see the final report, Supporting Ethical Data Research: An Exploratory Study of Emerging Issues in Big Data and Technical Research.

D&S researcher Robyn Caplan co-authored a paper analyzing how large, well-known companies such as BuzzFeed and Facebook argue against being categorized as media companies. Caplan and co-author Philip M. Napoli assert that this argument has led to the misclassification of these companies, and that such misclassification has profound policy implications.

A common position amongst online content providers/aggregators is their resistance to being characterized as media companies. Companies such as Google, Facebook, BuzzFeed, and Twitter have argued that it is inaccurate to think of them as media companies. Rather, they argue that they should be thought of as technology companies. The logic of this position, and its implications for communications policy, have yet to be thoroughly analyzed. However, such an analysis is increasingly necessary as the dynamics of news and information production, dissemination, and consumption continue to evolve. This paper will explore and critique the logic and motivations behind the position that these content providers/aggregators are technology companies rather than media companies, as well as the communications policy implications associated with accepting or rejecting this position.


In this primer, D&S researcher Monica Bulger defines the boundaries of “personalized learning,” explores the needs that various personalized learning systems aim to meet, and highlights the tensions between what is being promised with personalized learning and the practical realities of implementation. She also raises areas of concern, questions about unintended consequences, and potential risks that may come with the widespread adoption of personalized learning systems and platforms.

 


D&S fellow Mark Latonero considers recent attempts by policymakers, big tech companies, and advocates to address the deepening refugee and migrant crisis and, in particular, the educational needs of displaced children through technology and app development projects. He cautions developers and policymakers to consider the risks of failing to understand the unique challenges facing refugee children living without running water, let alone a good mobile network.

The reality is that no learning app or technology will improve education by itself. It’s also questionable whether mobile apps used with minimal adult supervision can improve a refugee child’s well-being. A roundtable at the Brookings Center for Universal Education noted that “children have needs that cannot be addressed where there is little or no human interaction. A teacher is more likely to note psychosocial needs and to support children’s recovery, or to refer children to other services when they are in greater contact with children.” Carleen Maitland, a technology and policy professor who led the Penn State team, found through her experience at Zaatari that in-person interactions with instructors and staff in the camp’s many community centers could provide far greater learning opportunities for young people than sitting alone with a mobile app.

In fact, unleashing ed tech vendors or Western technologists to solve development issues without the appropriate cultural awareness could do more harm than good. Children could come to depend on technologies that are abandoned by developers once the attention and funding have waned. Plus, the business models that sustain apps through advertising, or collecting and selling consumer data, are unethical where refugees are concerned. Ensuring data privacy and security for refugee children using apps should be a top priority for any software developer.

In cases where no in-person education is available, apps can still play a role, particularly for children who feel unsafe to travel outside their shelters or are immobile owing to injuries or disabilities. But if an app is to stand a chance of making a real difference, it needs to arise not out of a tech meet-up in New York City but on a field research trip to a refugee camp, where it will be easier to see how mobile phones are actually accessed and used. Researchers need to ask basic questions about the value of education for refugees: Is the goal to inspire learning on traditional subjects? Empower students with academic credentials or job skills? Assimilate refugees into their host country? Provide a protected space where children can be fed and feel safe? Or combat violent extremism at an early age?

To decide, researchers need to put the specific needs of refugee children first—whether economic, psychosocial, emotional, or physical—and work backward to see whether technology can help, if at all.


primer | 05.13.16

Mediation, Automation, Power

Robyn Caplan, danah boyd

A contemporary issues primer occasioned by Data & Society’s Who Controls the Public Sphere in an Era of Algorithms? workshop.

In this primer, D&S research analyst Robyn Caplan and D&S Founder danah boyd articulate emerging concerns and tensions coming to the fore as platforms like Google, Facebook, Twitter, and Weibo have overtaken traditional media forms, becoming the main way that news and information of cultural, economic, social, and political significance is produced and disseminated. As social media and messaging apps enable the sharing of news and information and serve as sites for public discussion and discourse about cultural and political events, the mechanisms and processes underlying this networked infrastructure (particularly big data, algorithms, and the companies controlling these information flows) are having a profound effect on the structure and formation of public and political life.

The authors raise and explore six concerns about the role of algorithms in shaping the public sphere:

  • Algorithms can be used to affect election outcomes and can be biased in favor of political parties.
  • Algorithms are editors that actively shape what content is made visible, but are not treated as such.
  • Algorithms can be used by states to achieve domestic and foreign policy aims.
  • Automation and bots are being used by state and non-state actors to game algorithms and sway public opinion.
  • The journalism industry and the role of the “fourth estate” have been affected by the logic of algorithms, and content is no longer serving reflexive, democratic aims.
  • Algorithms are being designed without consideration of how user feedback inserts biases into the system.

The authors also grapple with five different classes of tensions underpinning these various concerns and raise serious questions about what ideal we should be seeking:

  • Universality, Diversity, Personalization
  • A Change in Gatekeepers?
  • A Collapse/Re-emergence of Boundaries and Borders
  • Power and Accountability
  • Visibility, Accessibility, and Analysis

Finally, six proposed remedies and solutions to algorithmic shaping of the public sphere are considered and problematized. With each potential solution, the competing value systems and interests that feed into the design of technologies are highlighted:

  • Proactive Transparency
  • Reverse Engineering, Technical and Investigative Mechanisms
  • Design/Engineering Solutions
  • Computational/Algorithmic Literacy
  • Governance and Public Interest Frameworks
  • Decentralization in Markets and Technology

All systems of power are manipulated and there is little doubt that public spheres constructed through network technologies and algorithms can be manipulated, both by the architects of those systems and by those who find techniques to shape information flows. Yet, it is important to acknowledge that previous genres of media have been manipulated and that access to the public sphere has never been universal or even. As we seek to redress concerns raised by technical systems and work towards a more ideal form, it is essential to recognize the biases and assumptions that underpin any ideal and critically interrogate who benefits and who does not. No intervention is without externalities.

These varying tensions raise significant questions about who controls – and should control – the public sphere in an era of algorithms, but seeking solutions to existing concerns requires unpacking what values, peoples, and voices should have power.


Researchers Alexandra Mateescu and Alex Rosenblat, together with D&S Founder danah boyd, published a paper examining police body-worn cameras and their potential to provide avenues for police accountability and to foster improved police-community relations. The authors examine the potentially harmful consequences of constant surveillance, which has prompted concerns from civil rights groups that body-worn cameras may violate privacy and exacerbate existing police practices that have historically victimized people of color and vulnerable populations. They consider whether one can demand greater accountability without increased surveillance and suggest that “the trajectory laid out by body-worn cameras towards greater surveillance is clear, if not fully realized, while the path towards accountability has not yet been adequately defined, let alone forged.”

The intimacy of body-worn cameras’ presence—which potentially enables the recording of even mundane interpersonal interactions with citizens—can be exploited with the application of technologies like facial recognition; this can exacerbate existing practices that have historically victimized people of color and vulnerable populations. Not only do such technologies increase surveillance, but they also conflate the act of surveilling citizens with the mechanisms by which police conduct is evaluated. Although police accountability is the goal, the camera’s view is pointed outward and away from its wearer, and audio recording captures any sounds within range. As a result, it becomes increasingly difficult to ask whether one can demand greater accountability without increased surveillance at the same time.

Crafting better policies on body-worn camera use has been one of the primary avenues for balancing the right of public access with the need to protect against this technology’s invasive aspects. However, no universal policies or norms have been established, even on simple issues such as whether officers should notify citizens that they are being recorded. What is known is that body-worn cameras present definite and identifiable risks to privacy. By contrast, visions of accountability have remained ill-defined, and the role to be played by body-worn cameras cannot be easily separated from the wider institutional and cultural shifts necessary for enacting lasting reforms in policing. Both the privacy risks and the potential for effecting accountability are contingent upon an ongoing process of negotiation, shaped by beliefs and assumptions rather than empirical evidence.


In this Working Paper from We Robot 2016, D&S Researcher Madeleine Elish employs the concept of “moral crumple zones” within human-machine systems as a lens through which to think about the limitations of current frameworks for accountability in human-machine or robot systems.

Abstract:

A prevailing rhetoric in human-robot interaction is that automated systems will help humans do their jobs better. Robots will not replace humans, but rather work alongside and supplement human work. Even when most of a system will be automated, the concept of keeping a “human in the loop” assures that human judgment will always be able to trump automation. This rhetoric emphasizes fluid cooperation and shared control. In practice, the dynamics of shared control between human and robot are more complicated, especially with respect to issues of accountability.

As control has become distributed across multiple actors, our social and legal conceptions of responsibility remain generally about an individual. If there’s an accident, we intuitively — and our laws, in practice — want someone to take the blame. The result of this ambiguity is that humans may emerge as “moral crumple zones.” Just as the crumple zone in a car is designed to absorb the force of impact in a crash, the human in a robotic system may become simply a component — accidentally or intentionally — that is intended to bear the brunt of the moral and legal penalties when the overall system fails.

This paper employs the concept of “moral crumple zones” within human-machine systems as a lens through which to think about the limitations of current frameworks for accountability in human-machine or robot systems. The paper examines two historical cases of “moral crumple zones” in the fields of aviation and nuclear energy and articulates the dimensions of distributed control at stake while also mapping the degree to which this control of and responsibility for an action are proportionate. The argument suggests that an analysis of the dimensions of accountability in automated and robotic systems must contend with how and why accountability may be misapplied and how structural conditions enable this misunderstanding. How do non-human actors in a system effectively deflect accountability onto other human actors? And how might future models of robotic accountability require this deflection to be controlled? At stake is the potential ultimately to protect against new forms of consumer and worker harm.

This paper presents the concept of the “moral crumple zone” as both a challenge to and an opportunity for the design and regulation of human-robot systems. By articulating mismatches between control and responsibility, we argue for an updated framework of accountability in human-robot systems, one that can contend with the complicated dimensions of cooperation between human and robot.

 


If we measure infrastructure in terms of ROI, of course it doesn’t make sense to build out fiber to the home in Point Arena. By that measure, it also doesn’t really make sense to build bridges. Or roads. Or aqueducts. Public goods tend to have pretty rotten ROI. And today in the United States, the Internet increasingly acts as a stand-in or scaffolding upon which social and civic institutions are expected to operate, placing public services on the backbone of privately held platforms.
Without an equivalent to the Rural Electrification Act for broadband, it’s not clear how that scaffolding won’t collapse in on itself.
On a trip across the country, D&S artist in residence Ingrid Burrington stops in Point Arena, California. This small town sits next to the Manchester Cable Station, where “the internet rises out of the ocean,” yet despite its proximity to the cable, nearly half of the 34,000 households in the area have only marginal or no broadband access. In this article in her series for the Atlantic, Burrington details not only the digital divide in this area but also the efforts of local providers to deliver the access and connectivity the internet promises.
Follow the series at the Atlantic

D&S fellow Noel Hidalgo discusses his past, present, and future work in civic tech in an interview with James Burke of the Open State Foundation and the P2P Foundation.

To hear more about open government, civic hacking, and projects currently happening in NYC, listen to James’ interview with Noel at P2P.


Abstract: This empirical study explores labor in the on-demand economy using the rideshare service Uber as a case study. By conducting sustained monitoring of online driver forums and interviewing Uber drivers, we explore worker experiences within the on-demand economy. We argue that Uber’s digitally and algorithmically mediated system of flexible employment builds new forms of surveillance and control into the experience of using the system, which result in asymmetries around information and power for workers. In Uber’s system, algorithms, customer service representatives (CSRs), passengers, semiautomated performance evaluations, and the rating system all act as a combined substitute for direct managerial control over drivers, but distributed responsibility for remote worker management also exacerbates power asymmetries between Uber and its drivers. Our study of the Uber driver experience points to the need for greater attention to the role of platform disintermediation in shaping power relations and communications between employers and workers.


Public Understanding of Science | 04.01.15

Does literacy improve finance?

Martha Poon, Helaine Olen

Abstract: When economists ask questions about basic financial principles, most ordinary people answer incorrectly. Economic experts call this condition “financial illiteracy,” which suggests that poor financial outcomes are due to a personal deficit of reading-related skills. The analogy to reading is compelling because it suggests that we can teach our way out of population-wide financial failure. In this comment, we explain why the idea of literacy appeals to policy makers in the advanced industrial nations. But we also show that the narrow skill set laid out by economists does not satisfy the politically inclusive definition of literacy that literacy studies fought for. We identify several channels through which people engage with ideas about finance and demonstrate that not all forms of literacy will lead people to the educational content prescribed by academic economists. We argue that truly financially literate people can defy the demands of financial theory and financial institutions.

