In e-flux, Data & Society INFRA Lead Ingrid Burrington contemplates the maps of the internet.
“The historical maps made of the internet—and, later, the maps of the world made by the internet—are both reflection and instrument of the ideologies and entanglements of the networked world. They are one way we might navigate the premise of the networked citizen and her obligations to her fellow travelers in the networked landscape.”
Slate | 03.02.18
Data & Society Researcher Alex Rosenblat unveils the impact of Uber’s new driving limit policy.
“These moves from Uber and Lyft seem to align with their gig-economy model of employment, which structures work as an individual pursuit and individual liability. But even this sell is misleading. While, for many drivers, the idea of being independent at work is very appealing, their ability to make entrepreneurial decisions is consistently constrained by the ride-hail apps’ nudges and other algorithmic management, rules, external costs, and wage cuts.”
Data & Society INFRA Lead Ingrid Burrington reflects on her visit to Spaceport America.
“It’s a quintessential American desert trope: the future as rehearsal rather than reality. Many promises for technologies of future urbanism start as desert prototypes.”
In the gig economy, management by algorithms means employment relationships grow more remote and distributed across the network. Alex Rosenblat explains how workers navigate this by creating their own forums.
“Online forums aren’t just helping drivers like Cole navigate the challenges of their work, and helping those of us who use and study these platforms grasp those challenges too. They show how as employment relationships grow more remote and distributed across the network, workers can adapt, using technology to forge their own workplace culture.”
Wired | 11.29.17
On Wednesday, Nov. 29, the Supreme Court heard Carpenter v. United States, a Fourth Amendment case on access to cell phone location data. Postdoctoral scholars Julia Ticona & Andrew Selbst urged the court to understand that cell phones aren’t voluntary in this day and age.
“The justices will surely understand that without any alternatives for accessing online services, vulnerable (and over-policed) populations will be unable to make meaningful choices to protect their privacy, amplifying the disadvantages they already face.”
Harvard Business Review | 05.16.17
D&S researcher Mark Latonero provides an overview of the role of large tech companies in refugee crises.
While the 40-page brief is filled with arguments in support of immigration, it hardly speaks about refugees, except to note that those seeking protection should be welcomed. Any multinational company with a diverse workforce would be concerned about limits to international hiring and employee travel. But tech companies should also be concerned about the refugee populations that depend on their digital services for safety and survival.
Harvard Business Review | 04.19.17
Jongbin Jung, Connor Concannon, D&S fellow Ravi Shroff, Sharad Goel, and Daniel G. Goldstein explore new methods for machine learning in criminal justice.
Simple rules certainly have their advantages, but one might reasonably wonder whether favoring simplicity means sacrificing performance. In many cases the answer, surprisingly, is no. We compared our simple rules to complex machine learning algorithms. In the case of judicial decisions, the risk chart above performed nearly identically to the best statistical risk assessment techniques. Replicating our analysis in 22 varied domains, we found that this phenomenon holds: Simple, transparent decision rules often perform on par with complex, opaque machine learning methods.
D&S artist-in-residence Ingrid Burrington explores the importance of domain names at NamesCon, an annual conference for the domain-names industry.
In addition to being crucial to making the web work, domain names are also a highly political pocket of the web, particularly shaped by the legacy of colonialism. Most of the underlying protocols that make the internet work—including DNS—are encoded in ASCII, which translates bits into letterforms, numbers, and punctuation marks. But ASCII’s letterforms only represent the Latin alphabet, limiting expression in domain names to Western languages (while arguing that a character encoding is an instrument of imperialism sounds bold, so does assuming that “text” is synonymous only with “English”).
D&S fellow Zara Rahman writes about how immigrant families use social media and digital technologies.
The consequence is that the home of our deeply personal information has gone from treasured letters stored in a box at our houses, to servers owned by corporate companies that we’ll never see. Those personal notes, the ways of showing our family that we’re happy and content in our new lives, despite what we’ve lost — they live online now. The more you share with that corporation, the stronger those family ties get. There is a third party in these relationships.
D&S fellow Zara Rahman details the year’s ‘data-driven confusion’ and argues for a responsible data approach to both practice and comprehension.
We must take a responsible data approach to advocacy – address gaps in literacy proactively, be rigorous in our methods, and maintain credibility, especially on important issues. Nowadays, thanks to the speed and amplification of sources afforded to us via technology, analyses and “facts” will spread faster than before. Understanding the critical limitations of data and information is going to become ever more important in years to come.
D&S researcher Alex Rosenblat writes about the motivations of gig economy workers.
In sum, the effects of the gig economy on the workforce are mixed. These platforms seem to benefit people earning supplementary income or those lacking other job opportunities the most, while they impose the most risk on full-time earners. And Uber and Lyft are still facing legal challenges in the U.S. for classifying drivers as independent contractors, as opposed to employees who can receive benefits. (In the U.K., an employment tribunal recently ruled that two Uber drivers must receive employee benefits, like the national living wage. Uber plans to appeal that ruling.)
Harvard Business Review | 08.31.16
D&S affiliate Solon Barocas and D&S fellow Karen Levy examine a concept they call refractive surveillance, whereby surveillance of one group impacts another.
Debates about consumer privacy have largely missed the fact that firms’ ability to develop a better understanding of consumers also impacts workers’ day-to-day experiences, their job security, and their financial well-being.
But our research suggests that data collection frequently also impacts people other than those being surveilled. We call this dynamic refractive surveillance. In other words, collecting information about one group can facilitate control over an entirely different group. In our ongoing study, we investigate this dynamic in the context of retail tracking, to understand how data collection about customers can impact how retail workers are managed.
D&S researcher Josh Scannell wrote an extensive analysis of predictive policing algorithms, showing that, while they were not built to be racist, they mirror a racist system.
Northpointe’s algorithms will always be racist, not because their engineers may be bad but because these systems accurately reflect the logic and mechanics of the carceral state — mechanics that have been digitized and sped up by the widespread implementation of systems like CompStat.
EDUCAUSE review | 06.27.16
ProPublica | 05.23.16
D&S fellow Surya Mattu investigated bias in risk assessments, algorithmically generated scores predicting the likelihood of a person committing a future crime. These scores are increasingly used in courtrooms across America to inform decisions about who can be set free at every stage of the criminal justice system, from assigning bond amounts to fundamental decisions about a defendant’s freedom:
We obtained the risk scores assigned to more than 7,000 people arrested in Broward County, Florida, in 2013 and 2014 and checked to see how many were charged with new crimes over the next two years, the same benchmark used by the creators of the algorithm.
The score proved remarkably unreliable in forecasting violent crime: Only 20 percent of the people predicted to commit violent crimes actually went on to do so.
When a full range of crimes were taken into account — including misdemeanors such as driving with an expired license — the algorithm was somewhat more accurate than a coin flip. Of those deemed likely to re-offend, 61 percent were arrested for any subsequent crimes within two years.
We also turned up significant racial disparities, just as Holder feared. In forecasting who would re-offend, the algorithm made mistakes with black and white defendants at roughly the same rate but in very different ways.
- The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants.
- White defendants were mislabeled as low risk more often than black defendants.
Could this disparity be explained by defendants’ prior crimes or the type of crimes they were arrested for? No. We ran a statistical test that isolated the effect of race from criminal history and recidivism, as well as from defendants’ age and gender. Black defendants were still 77 percent more likely to be pegged as at higher risk of committing a future violent crime and 45 percent more likely to be predicted to commit a future crime of any kind.
D&S fellow Mark Latonero considers recent attempts by policymakers, big tech companies, and advocates to address the deepening refugee and migrant crisis and, in particular, the educational needs of displaced children through technology and app development projects. He cautions developers and policymakers to consider the risks of failing to understand the unique challenges facing refugee children living without running water, let alone a good mobile network.
The reality is that no learning app or technology will improve education by itself. It’s also questionable whether mobile apps used with minimal adult supervision can improve a refugee child’s well-being. A roundtable at the Brookings Center for Universal Education noted that “children have needs that cannot be addressed where there is little or no human interaction. A teacher is more likely to note psychosocial needs and to support children’s recovery, or to refer children to other services when they are in greater contact with children.” Carleen Maitland, a technology and policy professor who led the Penn State team, found through her experience at Zaatari that in-person interactions with instructors and staff in the camp’s many community centers could provide far greater learning opportunities for young people than sitting alone with a mobile app.
In fact, unleashing ed tech vendors or Western technologists to solve development issues without the appropriate cultural awareness could do more harm than good. Children could come to depend on technologies that are abandoned by developers once the attention and funding have waned. Plus, the business models that sustain apps through advertising, or collecting and selling consumer data, are unethical where refugees are concerned. Ensuring data privacy and security for refugee children using apps should be a top priority for any software developer.
In cases where no in-person education is available, apps can still play a role, particularly for children who feel unsafe to travel outside their shelters or are immobile owing to injuries or disabilities. But if an app is to stand a chance of making a real difference, it needs to arise not out of a tech meet-up in New York City but on a field research trip to a refugee camp, where it will be easier to see how mobile phones are actually accessed and used. Researchers need to ask basic questions about the value of education for refugees: Is the goal to inspire learning on traditional subjects? Empower students with academic credentials or job skills? Assimilate refugees into their host country? Provide a protected space where children can be fed and feel safe? Or combat violent extremism at an early age?
To decide, researchers need to put the specific needs of refugee children first—whether economic, psychosocial, emotional, or physical—and work backward to see whether technology can help, if at all.
D&S Advisor Ethan Zuckerman pushes back against a new myth developing around Bitcoin as a ready-made solution to complex humanitarian and international development problems around the globe:
Is Bitcoin really the best way to think about establishing a digital commons for financial transactions? Maybe not. The Bitcoin network requires large amounts of bandwidth to run and uses enormous amounts of power, which makes it challenging for people in the developing world to participate in mining or use the network reliably.
The challenges of financial inclusion in a place like Kenya are diverse, from the cost of sending and receiving money, to the difficulty in doing business across borders, and the concentrated power of Safaricom. Perhaps Bitcoin could spur financial innovation there. But there are no guarantees. Understanding what Bitcoin can do for people in the developing world will first require a better understanding of the people who live there.
Harvard Business Review | 04.06.16
D&S Researcher Alex Rosenblat examines how Uber’s app design and deployment redistributes management functions to semiautomated and algorithmic systems, as well as to consumer ratings systems, creating ambiguity around who is in charge and what is expected of workers. Alex also raises questions about Uber’s neutral branding as an intermediary between supply (drivers) and demand (passengers) and considers the employment structures and hierarchies that emerge through its software platform:
Most conversations about the future of work and automation focus on issues of worker displacement. We’re only starting to think about the labor implications in the design of platforms that automate management and coordination of workers. Tools like the rating system, performance targets and policies, algorithmic surge pricing, and insistent messaging and behavioral nudges are part of the “choice architecture” of Uber’s system: it can steer drivers to work at particular places and at particular times while maintaining that its system merely reflects demand to drivers. These automated and algorithmic management tools complicate claims that drivers are independent workers whose employment opportunities are made possible through a neutral, intermediary software platform.
In many ways, automation can obscure the role of management, but as our research illustrates, algorithmic management cannot be conflated with worker autonomy. Uber’s model clearly raises new challenges for companies that aim to produce scalable, standardized services for consumers through the automation of worker-employer relationships.
Slate | 03.08.16
D&S Researcher Tim Hwang and Samuel Woolley consider the larger trend toward automated politics, its likely future sophistication, and its potential impacts on the public sphere in the era of social media.
Political bots are challenging in part because they are dual-use. Even though many of the bot deployments we see are designed to manipulate social media and suppress discourse, bots aren’t inherently corrosive to the public sphere. There are numerous examples of bots deployed by media organizations, artists, and cultural commentators oriented toward raising awareness and autonomously “radiating” relevant news to the public. For instance, @stopandfrisk tweets information on every instance of stop-and-frisk in New York City in order to highlight the embattled policy. Similarly, @staywokebot sends messages related to the Black Lives Matter movement.
This is true of bots in general, even when they aren’t involved in politics. Intelligent systems can be used for all sorts of beneficial things—they can conserve energy and can even save lives—but they can also be used to waste resources and forfeit free speech. Ultimately, the real challenge doesn’t lie in some inherent quality of the technology but the incentives that encourage certain beneficial or harmful uses.
The upshot of this is that we should not simply block or allow all bots—the act of automation alone poses no threat to open discourse online. Instead, the challenge is to design a regime that encourages positive uses while effectively hindering negative uses.
D&S Researcher Madeleine Clare Elish considers the possibility of a full-on replacement of humans by robots. She argues that this scenario is nowhere near as close as we have been led to believe. Though algorithms can do an astounding range of things that were once viewed as exclusively human work, they don’t work all by themselves.
This is a crucial but often overlooked point in the debate around algorithms and the future of work: Most human jobs will not be replaced but rather reconfigured in the near future. We absolutely need to worry about the long-term implications on the demand for human labor and how this will affect the economy. But if we only focus on the question of whether and when humans will be replaced, we miss the impact algorithms are already having on work and the opportunities to make choices, as designers and consumers, about how algorithms can disrupt or enforce existing power dynamics in the future.
The Atlantic | 01.08.16
D&S artist in residence Ingrid Burrington explores how the FCC’s net neutrality rules are applied differently to mobile carriers than to wired broadband carriers and the effects on how people experience and perceive the mobile Internet vs. the Internet accessed on a laptop or desktop computer.
It really seems too obviously out of line to be true—mobile carriers are literally partnering with large media companies to subsidize data-devouring streaming services, while what might be considered the “open Internet” remains a paid service (and, considering the amount of data consumed by advertising alone, a lot of users are paying to view stuff they actually don’t care about at all).
D&S Artist in Residence Ingrid Burrington contemplates network infrastructure, underlining the fact that today’s infrastructure can’t last much longer under the strain of exponentially expanding connectivity demands. She suggests that the tech industry will soon have to face long-term questions concerning time, maintenance, and scale.
The impact of data centers—really, of computation in general—isn’t something that really galvanizes the public, partly because that impact typically happens at a remove from everyday life. The average amount of power to charge a phone or a laptop is negligible, but the amount of power required to stream a video or use an app on either device invokes services from data centers distributed across the globe, each of which uses energy to perform various processes that travel through the network to the device. One study (sponsored, weirdly enough, by the American Coal Association to enthuse about how great coal is for technology) estimated that a smartphone streaming an hour of video on a weekly basis uses more power annually than a new refrigerator.
D&S artist in residence Ingrid Burrington shares impressions from a tour of Facebook’s massive Altoona data center, and wonders about the extent to which Facebook might be creating an infrastructure to rival the internet itself.
The entrance to the server room where all of this hardware lives is behind both an ID-card reader and a fingerprint scanner. The doors open dramatically, and they close dramatically. It is only one of several server rooms at Altoona, but just this one room also seems endless. It is exactly the glimmering-LED cyberpunk server-porn dreamscape that it is supposed to be.
SSIR | 11.20.15
All around the world, media outlets are learning that some funders are uncomfortable with supporting journalism merely as a “public good.” They want to see proof of impact.
In this article, D&S advisor Ethan Zuckerman and Anya Schiffrin address emerging metrics used by media outlets to assess the impact that these news organizations have on the world. Traditional metrics are insufficient for outlets supported by philanthropic organizations rather than solely by advertising revenue, yet the alternative metrics are still being tested. Zuckerman and Schiffrin analyze an assortment of these approaches, revealing the difficulty in measuring media impact.
If we measure infrastructure in terms of ROI, of course it doesn’t make sense to build out fiber to the home in Point Arena. By that measure, it also doesn’t really make sense to build bridges. Or roads. Or aqueducts. Public goods tend to have pretty rotten ROI. And today in the United States, the Internet increasingly acts as a stand-in or scaffolding upon which social and civic institutions are expected to operate, placing public services on the backbone of privately held platforms. Without an equivalent to the Rural Electrification Act for broadband, it’s not clear how that scaffolding won’t collapse in on itself.
Quartz | 11.12.15
The gig economy, the platform economy, the networked economy, the sharing economy, the on-demand economy, the peer economy, the bottom-up economy… You’ve probably heard these and maybe other terms bandied about, often interchangeably, to describe how companies like Uber, Airbnb, Taskrabbit, and countless others operate.
Data & Society affiliate and senior editor at Quartz, Gideon Lichfield unpacks each of these terms and explains why they’re insufficient and not interchangeable. Lichfield argues that while a shift in the future of work is certainly happening, these phrases do not adequately capture it.
Pacific Standard | 11.10.15
Growth in the technology sector has popularized a dangerously narrow conception of innovation. For a richer view of innovation look beyond the glistening headquarters of technology companies to dusty construction sites.
In this article D&S advisor Gina Neff explores the dissonance between the technical data-driven solutions and the employees that they are foisted upon. Neff argues that the design of these systems does not allow workers to innovate with the data and thus limits the places ‘where we look for good ideas.’
In this article, D&S fellow Seeta Peña Gangadharan takes a closer look at the computerization of library data, noting the heavy dependence of libraries upon third-party systems that mediate the access, storage, and sharing of information between libraries and patrons. This consumer-centered role for libraries raises questions regarding survival and the ability of libraries to preserve the trust that patrons have in the institution.
The library’s future? It’s not about books. Regardless of which medium prevails, the library’s path forward depends on the integrity and inclusiveness of data flows it manages and mediates.
Fusion | 11.06.15
“In order to best understand the new technologies in our lives, it may be more useful to look to stage magicians than to source code.”
D&S researcher Tim Hwang discusses recent technological deceptions carried out by devices and services that we trust as providers but that may behave more like magicians. Hwang offers insights on how we can make sure that technology entertains us the way we expect from David Blaine rather than cons us like a Three Card Monte operator.
“It turns out driving directly toward huge, looming storm clouds is a great rhetorical device to employ on a road trip to see cloud infrastructure—and also a great way to be faced with the cruel truth of your own mortality.”
D&S artist in residence Ingrid Burrington gives an overview of network infrastructure by giving clarity to misleading metaphors and marketing jargon in her second piece in a series for the Atlantic. If you’ve ever wondered what “platform as a service” actually means or wanted to shake your fist at “the cloud,” head over to the Atlantic and read Burrington’s series.
“Starting a cross-country drive to New York in Los Angeles is pretty inconvenient, unless your cross-country drive is also a vision quest to see the Internet.”
Former fellow and current artist in residence at D&S, Ingrid Burrington offers insight into the humble beginnings of “cloud” and complicates the popular usage of the term in her series appearing in The Atlantic.
“Just as the software and hardware of the internet has been militarized by the imperatives of a mostly secret ‘cyberwar,’ so too are online social spaces being weaponized in new and mostly hidden ways.”
D&S researcher Tim Hwang assesses the future of quantitative public relations and manipulation techniques, arguing that these data-intensive, targeted, and subtle modes of influence have been made possible by online advertising.
How will the public be able to protect itself?
Pacific Standard | 09.10.15
D&S researcher Alex Rosenblat published an essay on remote management in Pacific Standard’s The Future of Work and Workers series:
Rather than having managers who listen to them and deliver feedback, drivers are managed through monitoring and rating systems delivered by semi-automated messaging. Uber says it’s just an app, not the drivers’ employer. Yet, this claim belies the significant control Uber exerts over their behavior through electronic management and performance metrics.
“So what brings you to Atlanta?” the man at the Alamo rental-car desk asked my friend Sam. We responded perhaps more eagerly than necessary.
“You know those markings you’ll see on the sidewalk that tell you where a gas main is or the signs that tell people to call before they dig?” Sam began.
“We’re here for an event where the people who make those markings do that competitively, for money,” I added. “We’re here to watch.”
D&S artist in residence Ingrid Burrington tells the story of traveling to Georgia for the 14th Annual International Utility Locate Rodeo and concludes with a meditation on seeing, infrastructure, and maps.
Weekly Wonk | 08.13.15
In this piece for Weekly Wonk, New America’s digital magazine, D&S advisor Charlton McIlwain explores the provenance and prospects of the contemporary civil rights “movement network”:
What’s new about the present movement is that a new generation of activists comprises its front lines. This younger cohort favors distributed leadership over following individual leaders and holds the civil rights establishment at arm’s length. More importantly, their efforts are distinguished by their ability to master and marshal digital technology to mobilize people and resources. Their pioneering efforts both innovate and stand on the shoulders of decades of organizing in the streets and online and are an early indication of how the movement will evolve in the coming years.
The Atlantic | 08.06.15
D&S fellow Tim Hwang distinguishes between offline boycotts and online refusals to link in the context of the advertising-driven Internet. What moral principles and norms are implicated when individuals choose not to link or click? How do those principles and norms interact with technical aspects of the Web?
D&S fellow Karen Levy published an essay on measurement in Pacific Standard’s The Future of Work and Workers series:
As data analytics and monitoring technologies come to be used in more and more workplaces, we must be attuned to how they affect these most vulnerable workers. Counting some kinds of work to the exclusion of others can mean that the real burdens of work are less visible, and that the workers who bear them may be less fairly paid for all that they do.
“Uber’s access to real-time information about where passengers and drivers are has helped make it one of the most efficient and useful apps produced by Silicon Valley in recent years. But if you open the app assuming you’ll get the same insight, think again: drivers and passengers are only getting part of the picture.”
Using research conducted with Luke Stark, D&S researcher Alex Rosenblat discusses the differently mediated experiences of Uber drivers and passengers and their various strategies for gaming that mediation.
“Increasingly, what underlies the debate over the so-called sharing economy is a nascent, bigger battle about how society wants machines coordinating and governing human activity. These apps don’t match and route people by hand. Instead, software and underlying algorithms make these technologies work. Companies throughout the ‘sharing economy’ — like Postmates, Handy, and TaskRabbit—all depend on the use of machines to match, sort, and assign tasks effectively at massive scale.”
In “The Mirage of the Marketplace: The disingenuous ways Uber hides behind its algorithm,” Tim Hwang and Madeleine Clare Elish delaminate Uber’s engineering of supply and demand in order to raise questions around the role and responsibilities of automation.
Quartz | 07.25.15
“In a self-driving car, the control of the vehicle is shared between the driver and the car’s software. How the software behaves is in turn controlled — designed — by the software engineers. It’s no longer true to say that the driver is in full control… Nor does it feel right to say that the software designers are entirely in control.
“Yet as control becomes distributed across multiple actors, our social and legal conceptions of responsibility are still generally about an individual. If there’s a crash, we intuitively — and our laws, in practice — want someone to take the blame.
“The result of this ambiguity is that humans may emerge as ‘liability sponges’ or ‘moral crumple zones.'”
At Data & Society’s Intelligence and Autonomy forum in March 2015, “moral crumple zone” emerged as a useful shared term for the way the “human in the loop” is saddled with liability in the failure of an automated system.
Excerpt: “With all due respect to the boldface AI worriers, do we need to invent a boogeyman from the future when we’ve got the present to worry about? Is tomorrow’s machine enslavement so much more terrifying than today’s vast amounts of child labor, human trafficking, and incarceration? Our current human law enforcement could certainly use some superhuman intelligence to counter the systemic and implicit bias that leads to such disparate levels of arrests, violence, and abuse.”
Vice | 05.27.15
Data & Society’s Intelligence and Autonomy initiative commissioned authors to envision future scenarios for intelligent systems in four domains: medicine, labor, urban design, and warfare.
These techniques and tactics have been causes for concern and outrage among civil-liberties advocates.
It’s telling that one of the first articles to promote predictive policing, a 2009 Police Chief Magazine piece by the LAPD’s Charlie Beck and consultant Colleen McCue, poses the question “What Can We Learn From Wal-Mart and Amazon About Fighting Crime in a Recession?” The article likens law enforcement to a logistics dilemma, in which prioritizing where police officers patrol is analogous to identifying the likely demand for Pop-Tarts. Predictive policing has emerged as an answer to police departments’ assertion that they’re being asked to do more with less. If we can’t hire more cops, the logic goes, we need these tools to deploy them more efficiently.
The Atlantic | 05.15.15
Excerpt: “Police-worn body cameras are coming. Support for them comes from stakeholders who often take opposing views. Law enforcement wants them, many politicians are pushing for them, and communities that already have a strong police presence in their neighborhoods are demanding that the police get cameras now. Civil-rights groups are advocating for them. The White House is funding them. The public is in favor of them. The collective — albeit, not universal — sentiment is that body cameras are a necessary and important solution to the rising concerns about fatal encounters between police and black men.
“As researchers who have spent the last few months analyzing what is known about body cams, we understand the reasons for this consensus, but we’re nervous that there will be unexpected and undesirable outcomes. On one hand, we’re worried that these expensive technologies will do little to curb systemic abuse. But what really scares us is the possibility that they may magnify injustice rather than help eradicate it. We support safeguards being put in place. But the cameras are not a proven technology, and we’re worried that too much is hinging on them being a silver bullet to a very serious problem. Our concerns stem from three major issues:
magazine article | 08.15.12
“I just spoke with Yves-Alexandre de Montjoye, a senior PhD student in computational privacy at the MIT Media Lab, and I’ve got some bad news about data and privacy. Then I’ve got worse news. But fear not, because after that I’ve got some good news!”
D&S advisor Baratunde Thurston tells us about a radical proposal that would not only help us as consumers but also benefit the companies keeping our data. Read it at Fast Company to find out more!
“But as history tells us, camera evidence does not an indictment make.”
D&S advisor Janet Vertesi discusses the difficulty of visual evidence in criminal indictments and the power of visual suggestibility, offering reasons why police-worn body cameras may not be the panacea they have recently been portrayed as.
One week D&S affiliate Elana Zeide is described by the new app Crystal as “a quick learner with strong analytical, creative, and social skills, but may seem scatter-brained, forgetful, and/or sarcastic” and the next week as “pragmatic, independent, and need logical reasons for everything—but [am] able to take a calculated risk when necessary.”
While it isn’t clear to Zeide why the app changed its opinion, she shows why such judgments should be taken with caution and points to the larger implications that these types of tools can have.
magazine article | 04.24.15
“James Bond had one. So did Maxwell Smart and Captain Kirk. Science fiction is littered with examples of heroes and villains barking orders into their wrists or pressing the right combination of tiny buttons to save the day.”
D&S advisor Janet Vertesi discusses the newly released Apple Watch and some of the implications of one of the forces driving its popularity: conspicuous consumption.
magazine article | 04.20.15
“Even if 50,000 people shorten their showers, this is a drop in the proverbial bucket.”
The drought in California is a serious issue that needs to be addressed, but as D&S advisor Janet Vertesi explains in this article, we should begin thinking about solutions beyond the individual.
magazine article | 04.13.15
After his Facebook account was hacked, D&S advisor Baratunde Thurston saw the other side of allowing social media giants to access other websites, apps, and services. After sharing his take-aways from the experience you might reconsider connecting your Twitter account the next time you find yourself wondering, as Thurston puts it: “Exactly why do you need to know my email address and have me upload a profile photo, random app I don’t really care about?”
D&S fellows Karen Levy and Tim Hwang examine the ethics of design theater.
Excerpt: “A machine’s front stage performance gets enacted through design. Just as a human provides front stage cues through her appearance and behavior (for instance, by talking with a certain degree of formality, or wearing a uniform), design provides signals for how the people around a machine should understand and interact with it. Sometimes these cues are relatively forthright: press this button to start, plug me in here. But just as humans can provide social cues that mislead others about their ‘true’ nature, the design of a system or artifact can invoke deception: a machine, like a person, can lie, omit, or mislead.”
Bots are slippery and weird and not particularly monetizable, which is part of what makes them magic and what maybe puts them at risk.
In this article, D&S fellow Ingrid Burrington shares her thoughts on bots, GIFs, and magic. Whether you love or hate Twitter bots, head over to Source and read Ingrid’s perspective on why she believes bots are internet magic.
“The accelerated age buries technological origin stories beneath endless piles of timestamped data. When people lose sight of these origin stories, they do a disservice to our technologies and to ourselves.” In this essay Data & Society fellow Ingrid Burrington works through the history of and resistance to GPS, and its connection to networked time, in order to argue that, “[i]n the rush of a persistent accelerated now, interruptions and challenges to life in real-time are sometimes necessary in order to ask what kind of future we’re building.”
Excerpt: “What’s more, metaphors matter because they shape laws and policies about data collection and use. As technology advances, law evolves (slowly, and somewhat clumsily) to accommodate new technologies and social norms around them. The most typical way this happens is that judges and regulators think about whether a new, unregulated technology is sufficiently like an existing thing that we already have rules about—and this is where metaphors and comparisons come in.”
Data & Society affiliate Kate Crawford comments on a court case in which a law firm is using data from a plaintiff’s Fitbit in support of her personal injury claim and explores the implications of elective self-tracking technologies for “truth” in legal proceedings.
Quartz | 11.02.14
Self-driving cars are no longer in our distant future; they’re here, and they’re becoming more independent. However, D&S fellow Anthony Townsend and NYU colleague Greg Lindsay argue that this approach is “looking at the wrong problem.”
In this piece, Data & Society fellow Karen Levy criticizes the oversimplifications of technological tools that attempt to “solve” rape. “It’s encouraging to see techies trying to address knotty social issues like sexual violence. But if technology is going to intervene for good, it needs to adopt a more nuanced approach — one that appreciates that not every problem can be treated as a data problem.”
Fairness is one of those values that Americans love to espouse. It’s just as beloved in technical circles, where it’s often introduced as one of the things that “neutral” computers do best. We collectively perceive ourselves and our systems to be fair and push against any assertion that our practices are unfair. But what do we even mean by fairness in the first place?
In this article, D&S founder danah boyd unpacks the term “fairness,” from its cultural struggle between equality and equity to market-driven models of fairness. She does this to get readers thinking about how these different understandings shape society and, specifically, to help tech companies consider the costs that can come when market-driven fairness guides their products.
In this essay, Data & Society advisor Ethan Zuckerman explains his belief that the “fallen state of our Internet is a direct, if unintentional, consequence of choosing advertising as the default model to support online content and services.” He suggests some possible ways to support content and services in lieu of the current, dominant model in which users are the product sold to advertisers.
In this essay Data & Society affiliate Kate Crawford asks, “What does the lived reality of Big Data feel like?” She offers “surveillant anxiety — the fear that all the data we are shedding every day is too revealing of our intimate selves but may also misrepresent us.” And she pairs the anxiety of the surveilled with the anxiety of the surveillers: “that no matter how much data they have, it is always incomplete, and the sheer volume can overwhelm the critical signals in a fog of possible correlations.”
magazine article | 01.13.14
“As consumers we’ve been told that we’re in charge, so we enjoy the ritual, even though it’s exhausting. We even decide when we’ll opt out. But what happens when companies walk away from us first?”
In this article, D&S advisor Baratunde Thurston proposes a way for users to take more agency in their relationships with apps and the companies that created them.
Stop whatever you are binge-watching and check out the British television show Black Mirror; in this article, D&S advisor Baratunde Thurston explains why.