working paper | 03.02.18

The Intuitive Appeal of Explainable Machines

Andrew Selbst, Solon Barocas

In this paper, Data & Society Postdoctoral Scholar Andrew Selbst and Affiliate Solon Barocas respond to calls for explainable machines.

“We argue that calls for explainable machines have failed to recognize the connection between intuition and evaluation and the limitations of such an approach. A belief in the value of explanation for justification assumes that if only a model is explained, problems will reveal themselves intuitively. Machine learning, however, can uncover relationships that are both non-intuitive and legitimate, frustrating this mode of normative assessment. If justification requires understanding why the model’s rules are what they are, we should seek explanations of the process behind a model’s development and use, not just explanations of the model itself.”


working paper | 02.02.17

The Legacy of inBloom

Monica Bulger, Patrick McCormick, Mikaela Pitcan

In this working paper, D&S researcher Monica Bulger, Patrick McCormick, and D&S research analyst Mikaela Pitcan detail the legacy of inBloom.

Although inBloom closed in 2014, it ignited a public discussion of student data privacy that resulted in the introduction of over 400 pieces of state-level legislation. The fervor over inBloom showed that policies and procedures were not yet where they needed to be for schools to engage in data-informed instruction. Industry members responded with a student data privacy pledge that detailed responsible practice. A strengthened awareness of the need for transparent data practices among nearly all of the involved actors is one of inBloom’s most obvious legacies.

Instead of a large-scale, open source platform built as a multi-state collaboration, the trend in data-driven educational technologies since inBloom’s closure has been toward closed, proprietary systems, adopted piecemeal. To date, no large-scale educational technology initiative has succeeded in American K-12 schools. This study explores several factors that contributed to the demise of inBloom and a number of important questions: What were the values and plans that drove inBloom to be designed the way it was? What were the concerns and movements that caused inBloom to run into resistance? How has the entire inBloom development impacted the future of edtech and student data?


D&S fellow Diana Freed co-authored a health technology piece that introduces an approach called YADL (Your Activities of Daily Living), which “uses images of ADLs and personalization to improve survey efficiency and the patient experience.”

It offers several potential benefits: wider coverage of ADLs, improved engagement, and accurate capture of individual health situations. In this paper, we discuss our system design and the wide applicability of the design process for survey tools in healthcare and beyond. Interactions with a small number of patients with arthritis throughout the design process have been promising and we share detailed insights.


In this primer, D&S researcher Claire Fontaine examines the construct of accountability as it functions in discussions around education reform in the American public education system. The paper considers the historic precursors to accountability, as well as the set of political, economic, cultural, and social conditions that led to test scores becoming the main measure of a school’s success.

In addition to the historical context around accountability, the paper considers important questions about who accountability serves, what the incentive structures are, and how accountability is gamed and resisted. In short, accountability of what, to whom, for what ends, at what cost?

Abstract:

There is an ongoing tension in the American public education system between the values of excellence, equity, and a sustained commitment to efficiency. Accountability has emerged as a framework in education reform that promises to promote and balance all three values. Yet, this frame is often contested due to disagreements over the role of incentives and penalties in achieving desirable change, and concerns that the proposed mechanisms will have significant unintended consequences that outweigh potential benefits. More fundamentally, there is widespread disagreement over how to quantify excellence and equity, if it is even possible to do so. Accountability rhetoric echoes a broader turn toward data-driven decision-making and resource allocation across sectors. As a tool of power, accountability processes shift authority and control away from professional educators and toward policymakers, bureaucrats, and test makers.

The construct of accountability is predicated on several assumptions. First, it privileges quantification and statistical analysis as ways of knowing and is built on a long history of standardized testing and data collection. Second, it takes learning to be both measurable and the product of instruction, an empiricist perspective descended from John Locke and the doctrine that knowledge is derived primarily from experience. Third, it holds that schools, rather than families, neighborhoods, communities, or society at large, are fundamentally responsible for student performance. This premise lacks a solid evidentiary basis and is closely related to the ideology of meritocracy. Finally, efforts to achieve accountability presume that market-based solutions can effectively protect the interests of society’s most vulnerable, another controversial assumption.

The accountability movement reflects the application of free market economics to public education, a legacy of the Chicago School of Economics in the post-World War II era. As a set of policies it was instantiated in the Elementary and Secondary Education Act (ESEA) of 1965, reauthorized as the No Child Left Behind Act (NCLB) of 2002, and reinforced by the Every Student Succeeds Act (ESSA) of 2015. Teaching and learning are increasingly measured and quantified to enable analysis of the relationship between inputs (e.g., funding) and outputs (e.g., student performance).

As has been true in other sectors when data-driven surveillance and assessment practices are introduced, outcomes are not always as expected. It is unclear whether this data push will promote equality of opportunity, merely document inequality, or perhaps even increase racial and socioeconomic segregation. Furthermore, little is understood about the costs of increased assessment on the health and success of students and teachers, externalities that are rarely measured or considered in the march to accountability. States will need to generate stakeholder buy-in and think carefully about the metrics they include in their accountability formulas in order to balance mandates for accountability, the benefits that accrue to students from preserving teacher autonomy and professionalism, the social good of equal opportunity, and public calls for transparency and innovation.


D&S Fellow Mark Latonero produced a working paper presenting research done in collaboration with Sheila Murphy, Patricia Riley, and Prawit Thainiyom at the University of Southern California under USAID’s C-TIP Campus Challenge Research Grant initiative. The research used a public opinion survey to assess how an MTV Exit documentary changed knowledge, attitudes, and behaviors related to trafficking vulnerability among the target population in Indonesia. The research also included a social media analysis to assess how activists in Indonesia frame discussions around human trafficking. The paper was produced under the Democracy Fellows and Grants (DFG) program, which is funded through USAID’s Center of Excellence on Democracy, Human Rights, and Governance (DRG Center) and managed by the Institute of International Education.

The researchers sought to generate data to inform the design of programs to raise awareness about trafficking among vulnerable populations and influence knowledge, attitudes, and practices related to trafficking. This paper focuses on research conducted in Indonesia by a team led by the University of Southern California (USC).

The USC team’s research in Indonesia included two components: a public opinion survey and an analysis of social media, both implemented in 2014. The public opinion survey was administered in Indramayu, West Java, Indonesia, a district with more than 1.77 million people that is a “hot spot” for human trafficking. USC administered the survey twice, with 527 participants; between the first and second wave, 319 of the participants watched an MTV Exit documentary on Indonesians’ experiences with human trafficking. USC conducted the social media assessment from May to July 2014, searching Twitter, Facebook, and YouTube in Indonesia for posts containing one or more of seven key words that would indicate that a post was about human trafficking. Key findings include:

  • The MTV Exit documentary on Indonesians’ experience with human trafficking had limited effects on increasing the viewers’ knowledge of trafficking, awareness of vulnerability to trafficking, or intention to reduce vulnerability—suggesting that awareness-raising materials should be pretested to ensure that messages are compelling for and relevant to the target community.
  • Face-to-face engagement and discussion were the most effective ways to decrease misconceptions about human trafficking, trafficking vulnerability, and effective risk reduction.
  • Although there clearly were social media conversations about and activism around human trafficking in Indonesia, USC found no evidence of activists using Twitter to organize or augment a strategic advocacy campaign. It may therefore be useful to consider how activists in other countries in the region have used social media to disseminate information or build and leverage networks for collective action and social change.

D&S fellow Anthony Townsend co-authored an NYU Marron Institute working paper. Abstract:

The 21st century is being shaped by two global trends: the near-total urbanization of the world’s population, and the seamless integration of digital information technology throughout the built and manufactured environment. In this third phase of the diffusion of computing, following the mainframe (one computer, many users) and the personal computer (one computer, one user), the dominant model is ubiquitous computing, “in which individuals are surrounded by many networked, spontaneously yet tightly cooperating computers” (Weiser, 1991; Mühlhäuser & Gurevych, 2008).

A diverse array of interests is deploying these technologies at an accelerating pace, and a handful of global cities find themselves at the forefront of the convergence of urbanization and computational ubiquity. This working paper investigates a key strategy these cities have developed: the creation of what we call “digital master plans.” These plans are attempts to mobilize local stakeholders around visions, goals, and road maps to adapt to these external technological and economic pressures, within local social, economic, and political constraints.

We surveyed plans from eight cities – New York, Chicago, London, Barcelona, Singapore, Hong Kong, Dublin, and San Francisco – identifying the scope of content addressed in the plans, the process used to develop them, and the overall approach to implementation chosen. We find that while there is little convergence of methodology, the plans share a common set of goals: the amplification of existing investments in infrastructure, government services, and economic development through sustained, incremental innovation in digital technology. We identify four strategic approaches for action for cities considering digital master planning: facilitative, learning, systems, and interventionist.


working paper | 07.16.15

Certifying and removing disparate impact

Michael Feldman, Sorelle A. Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian

D&S fellow Sorelle Friedler and her research colleagues investigate the ways that algorithms make decisions in all aspects of our lives and ask whether we can determine if these algorithms are biased, involve illegal discrimination, or are unfair. In this paper, they introduce and address two problems with the goals of quantifying and then removing disparate impact.

Abstract: What does it mean for an algorithm to be biased? In U.S. law, unintentional bias is encoded via disparate impact, which occurs when a selection process has widely different outcomes for different groups, even as it appears to be neutral. This legal determination hinges on a definition of a protected class (ethnicity, gender, religious practice) and an explicit description of the process.
When the process is implemented using computers, determining disparate impact (and hence bias) is harder. It might not be possible to disclose the process. In addition, even if the process is open, it might be hard to elucidate in a legal setting how the algorithm makes its decisions. Instead of requiring access to the algorithm, we propose making inferences based on the data the algorithm uses.
We make four contributions to this problem. First, we link the legal notion of disparate impact to a measure of classification accuracy that, while known, has received relatively little attention. Second, we propose a test for disparate impact based on analyzing the information leakage of the protected class from the other data attributes. Third, we describe methods by which data might be made unbiased. Finally, we present empirical evidence supporting the effectiveness of our test for disparate impact and our approach for both masking bias and preserving relevant information in the data. Interestingly, our approach resembles some actual selection practices that have recently received legal scrutiny.
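The selection-rate comparison behind this legal standard, and the leakage test the authors propose, are both easy to illustrate. The following is a minimal sketch, not the authors’ implementation: it applies the EEOC “four-fifths rule” ratio to synthetic outcome data, then uses a classifier’s cross-validated accuracy in predicting the protected class from the remaining attributes as a rough stand-in for the paper’s accuracy-based measure. The data, column meanings, and thresholds are all illustrative assumptions.

```python
# A minimal sketch (not the authors' code) of the two checks described
# above, run on synthetic data. The proxy correlation and the 0.8
# threshold (the EEOC "four-fifths rule") are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 5000

# Synthetic data: the decision rule never sees the protected attribute
# directly, but a feature it does see is correlated with it.
protected = rng.integers(0, 2, n)                     # 1 = protected class
feature = rng.normal(0.0, 1.0, n) - 0.8 * protected   # correlated proxy
selected = (feature + rng.normal(0.0, 0.5, n) > 0.0).astype(int)

# 1. Disparate impact as a ratio of selection rates between groups.
rate_protected = selected[protected == 1].mean()
rate_other = selected[protected == 0].mean()
ratio = rate_protected / rate_other
print(f"selection-rate ratio: {ratio:.2f} (four-fifths rule flags < 0.8)")

# 2. Leakage test: if the protected class can be predicted from the
# other attributes, a facially neutral rule can still produce disparate
# impact. Cross-validated accuracy well above the 0.5 base rate
# signals leakage.
X = feature.reshape(-1, 1)
leak = cross_val_score(LogisticRegression(), X, protected, cv=5).mean()
print(f"protected class predicted from other attributes: {leak:.2f} accuracy")
```

In this framing, a low selection-rate ratio or a high leakage accuracy would motivate the repair step the abstract mentions: transforming the non-protected attributes so that the protected class can no longer be inferred from them, while preserving as much relevant information in the data as possible.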


working paper | 05.14.14

Networked Rights and Networked Harms

Karen Levy, danah boyd

(Conference draft). “Networked Rights and Networked Harms.” Presented at the Privacy Law Scholars Conference (June 6, 2014) and Data & Discrimination (May 14, 2014).

The goal of this paper (far from a finished product, filled with gaps in logic and argumentation) is to imagine and interrogate two interwoven concepts of “networked rights” and “networked harms,” to bridge conversations in law, policy, and social science.

To learn more or read the draft, contact Karen and danah.

