The Cloud is Dead, Deadly, and Haunted

Introducing Our Series on Living with Legacies of Resource Extraction

The resource costs of AI “are informed not only by histories of extraction and exploitation, but by those of resistance,” D&S Climate Program Director Tamara Kneese writes. “This means we need to do more than document harms.”

April 21, 2025

The cloud is dead. This is true in several respects: what we refer to as “the cloud” is composed of the data of the dead, whose residues persist across websites and platforms. Cloud computing itself can be deadly, contributing to military campaigns and climate destruction. At the same time, we have reached the limits of the cloud as a viable metaphor in computing, as companies rethink their promise to maintain all of our digital assets in perpetuity. The cloud is also haunted by legacies of resource extraction, as data centers and other tech infrastructures reverberate with the histories of environmental racism and development associated with previous technologies: plantations, coal mining, logging, telegraph wires, railroads, highways, factories, and warehouses. While the tech sector has long been rife with ethereal metaphors about clouds — despite computing’s obvious material underpinnings — the visible, physical expansion of resource- and energy-intensive technologies like AI data centers and cryptocurrency mining operations has made the materiality of computing undeniable. As these technologies assert themselves on landscapes and communities, the infrastructural, labor, and resource demands of computing become more difficult to ignore. In the world’s data center capital of Loudoun County, Virginia, for example, the looming presence of data centers and transmission lines obstructs otherwise bucolic views.

My research highlights the relationship between the data of the dead and the infrastructures they depend on, as mining the dead for foundation models intersects with deadly AI applications. In the same way that I examine the user experience of death on social media platforms, I consider the lifecycle of technologies, from ideation to end of life, disposal, or reuse. The environmental toll of the digital supply chain stems in part from planned obsolescence, the industry’s quest to produce ever more goods for consumption; AI’s intrinsic reliance on compute means that GPUs, which cannot be easily reused, contribute to an explosion of e-waste. The discourses around generative AI tend to oversell its uses, applying it to tasks it’s not suited for — which can lead to intimate horrors like chatbot replacements for your dead loved ones, while obscuring the technology’s role in broader horrors, like ecological destruction. The local effects that data centers, resource extraction, and electronics manufacturing (the infrastructures that facilitate AI) have on a place, including the loss of agricultural land and water supplies, pollution, and housing displacement, are connected to the larger, globalized but sometimes more distant harms of tech companies and AI production and deployment: surveillance, war, incarceration, and labor exploitation.

Such embodied repercussions are rarely part of internal company discussions about sustainability and responsible AI practices, which emphasize metrics in service of corporate responsibility reporting (accounting for greenhouse gas emissions, for example). But it’s not enough to tweak the tech or optimize for efficiency and expect results. Even advocacy responses to algorithmic harms can perpetuate technosolutionism: modifying the Airbnb algorithm to make it less biased, for instance, even as the company, by its very premise, continues to displace Black people from their homes. Similarly, tracking the carbon, energy, or water costs of a particular AI model does not necessarily translate into meaningful change: measurement is not the same as action. So while I could recite estimates about the resource costs of AI — from mining and manufacturing, to training and inference, through end of life — it is crucial to recognize that these numbers do not exist in isolation. They are informed not only by histories of extraction and exploitation, but by those of resistance. This means we need to do more than document harms. Through collective action, we need to change the fundamental power structures, and the political economy, around technology production. To build something new.
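To make concrete what model-level measurement typically looks like in practice, here is a minimal sketch using the open-source codecarbon Python package; the workload function is a hypothetical stand-in, and the number it produces is exactly the kind of isolated metric described above.

    # A minimal sketch of model-level carbon accounting, assuming the
    # open-source codecarbon package; the workload below is a hypothetical
    # stand-in for a training or inference run.
    from codecarbon import EmissionsTracker

    def run_workload():
        # Hypothetical compute-heavy task standing in for a model run.
        return sum(i * i for i in range(10_000_000))

    tracker = EmissionsTracker(project_name="model-footprint-sketch")
    tracker.start()
    run_workload()
    emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent

    # The metric exists, but the location of the pollution, the water drawn,
    # and the people living near the data center appear nowhere in it.
    print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")

Whatever number this prints, it says nothing about where the emissions land or on whom, which is precisely the gap between measurement and action.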

High-level frameworks for AI risk management and safety (like those developed by NIST) can make it hard to see what impacts to “people and the planet” might look like in practice or how they might be stopped. And too often, such frameworks are a substitute for engaging the diverse communities at the frontlines of both climate change and AI’s effects. In prioritizing technical evaluations, policy recommendations and internal corporate assessment practices leave downstream repercussions or environmental “externalities” out of the equation. Measuring bias or carbon emissions attributed to a model in a lab does not capture the effects of pollution on bodies and environments.

Through my interviews with tech workers across the industry and community activists on the ground, I am trying to reconcile these different perspectives. How can the people who are designing, building, deploying, and regulating digital technologies really see the visceral effects of their decision-making? How can they draw connections between the work they are doing on a day-to-day basis, their mundane workflows, and the devastating long-term environmental and health impacts of these technologies across the lifecycle? (My findings will appear in a forthcoming report on climate action in tech, to be published by Data & Society: stay tuned.)

Historicizing Resistance In and Out of Tech

In a post on his blog, the computer scientist Ali Alkhatib laments the toothlessness of AI ethics conferences in critiquing the environmental and military applications of AI. He recognizes the conundrum participating researchers face in critiquing cloud companies that are facilitating war and wrecking the climate when those same companies are financially backing the conference venue, or their labs. This is a feeling that many researchers and tech workers on responsible AI or sustainability teams share. Is it possible to change the system from within, without being complicit in the worst applications of technology? 

Across the tech industry, in different parts of the world and at different kinds of companies and organizations, the people I interview are increasingly being asked to incorporate their employers’ bespoke AI models into their workflows — and they worry about the effects of generative AI on workers and the climate. In their accounts of trying to mitigate the climate impacts of the tech industry from the inside, I hear about moments of individual and collective refusal. Sometimes doing this kind of work is a means of finding co-conspirators. One technologist at a major company told me that working on sustainability was the only time she was ever in meetings with all women. Others felt that the new focus on generative AI was stymying progress, that no one at the company seemed to want to invest time or resources into making their products more ethical or sustainable. I also hear about people burning out and quitting altogether, about utter disillusionment with the system. There is a growing recognition that trying to reform companies from within is not enough.

For a short time several years ago, climate pledges at Amazon and Mozilla and the work of climate employee resource groups at Microsoft and other companies offered a bit of hope. Companies also folded employee volunteerism into their greenwashing campaigns: Microsoft regularly touted its strong employee resource group around climate issues. But net zero was an illusion, and Microsoft is now using AI to accelerate oil and gas drilling, expanding its data centers in areas that are experiencing drought and growing its carbon footprint. This is why even short-term histories are crucial when we are thinking about ways of limiting the power of tech companies in a moment of AI dominance. Today, the empty speculation of NFTs or the metaverse feels remote, let alone the early days of the “tech worker movement,” or the wave of tech organizing during the first Trump administration — when tech workers refused to build databases for mass deportation, and workers protested Project Maven, the Pentagon’s pilot program that used Google AI to analyze drone footage, with the hashtag #TechWontBuildIt. But as we see now, there is a limit to what companies will tolerate if it threatens their bottom line. Amazon Employees for Climate Justice organizers were fired for calling out the unsafe working conditions in the company’s warehouses. More recently at Google, No Tech For Apartheid activists were fired en masse after a direct action. Corporate tech DEI efforts in the wake of the uprisings after George Floyd’s murder, which were often propelled by employee-led organizing efforts, have been dismantled.

Tech’s power grew during the early pandemic and the related tech boom, giving way to a crash that didn’t affect shareholder value, but did impact worker power. Now, with a new Trump administration that is indistinguishable from the darkest technologist fantasies of control, we inhabit a period of resegregation, mass layoffs in tech and the public sector alike, and the threat of AI as a way to immiserate workers if not altogether replace them. We have swiftly moved from the “tech-lash” of the early days of the first Trump administration through a tech crash and landed firmly in a period of tech fascism. But what can be grounding amid such tumult is understanding the coalitions that have formed before, even in places like Santa Clara Valley, around environmental justice, immigrants’ rights, and labor, as with the Silicon Valley Toxics Coalition that emerged in the 1980s in San Jose.

In his new book on the legacies of environmental racism in Oakland, California, and the city’s relationship to electronics manufacturing in Asia through what he calls the “Pacific Circuit,” the journalist Alexis Madrigal accounts for the transnational interdependencies and solidarities that can emerge. He also considers how the imagined techno-future is forever haunted by the past, through toxins left in the soil by previous industrial eras, like the Superfund sites dotting Silicon Valley, or wartime napalm atrocities in Korea and Vietnam. In some cases, the hauntings are more literal: Madrigal recounts the spirit possessions among Malaysian electronics factory workers in the 1970s, as women used haunting as a form of resistance to global assembly lines, shutting the factory down through their actions.

Environmental Impacts Across Time and Space 

For over a decade, companies like Facebook, Google, Amazon, and Apple have sought control over data infrastructures as a source of power, propelling a system of endless data production and capture. As the media studies scholar Mél Hogan has argued, the infrastructure creates the demand for data, not the other way around. Governments are also complicit, giving companies tax breaks to build more data centers, even in drought-stricken areas like the Arizona desert. Michigan, for example, approved tax breaks for a hyperscale data center that will undermine the state’s own climate goals and increase constituents’ water bills. Perhaps one positive outcome of generative AI is that we are finally talking about the environmental impacts of computing as a whole.

Extraction happens at every stage of the global supply chain, and this work carries the risk of death: We see this in the dangerous work of Zama Zamas in abandoned mineshafts at the end of the supply chain; the quartz taken from the flooded mountain towns of North Carolina, as climate change impacts electronics manufacturing supply chains while stealing lives; and the toxic chemicals associated with electronics manufacturing from 1980s Santa Clara Valley to 2000s Taiwan, which cause birth defects, miscarriage, and cancer in workers and their loved ones. We see it in the rash of suicides among Foxconn factory workers in 2010, who made devices under conditions that proved unbearable, with few breaks or protections for the largely migrant workforce. A single ChatGPT inquiry consumes a bottle of water, maybe two (calculations vary, but are generally dire), taking water from communities around data centers. Google’s greenhouse gas emissions in 2023 were 48 percent higher than in 2019. Just a few tech companies control energy infrastructures, investing in nuclear power while bucking safety regulation and keeping coal plants open. And generative AI is built on the backs of poorly paid, often traumatized data workers, and extractions from creative workers — both living and dead — all while contributing to climate change.
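To see how such per-query water figures are typically derived, here is a back-of-envelope sketch; every input is an illustrative assumption (published estimates vary by orders of magnitude), combining assumed energy per query with on-site cooling water and the off-site water used to generate the electricity.

    # Back-of-envelope sketch of a per-query water estimate. All inputs are
    # illustrative assumptions, not measurements; published figures vary widely.
    energy_per_query_kwh = 0.14    # assumed energy for one long generative response
    onsite_wue_l_per_kwh = 0.5     # assumed on-site cooling water (WUE)
    offsite_water_l_per_kwh = 3.0  # assumed water footprint of the electricity

    water_per_query_l = energy_per_query_kwh * (
        onsite_wue_l_per_kwh + offsite_water_l_per_kwh
    )
    print(f"~{water_per_query_l:.2f} L per query")  # ~0.49 L: about one bottle

The spread in published numbers comes almost entirely from these input assumptions, and, as with carbon accounting, the total is an abstraction: it does not say whose aquifer the water came from.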

We are living with the legacies of resource extraction; the fraught relationship between the tech industry and environmental justice is not new. This means that to understand the ways that the digital value chain is impacting both the planet and vulnerable communities around the globe today, we must situate the current context across time, geographies, and the supply chain — while at the same time zeroing in on specific places and their unique situatedness. The essays in this series provide a glimpse of how communities address (or have addressed) the unequal power dynamics of technology production and deployment, and the ways tech impacts people’s everyday lives and the environment around them. Xiaowei Wang presents their research on electronics manufacturing workers in Taiwan and Korea, tracing how the toxicity of the process and the industry’s desire for unbridled growth shortens people’s lives through chronic illness and cancer. Zane Griffin Talley Cooper examines resource extraction in the Arctic, setting genealogies of fantasies around the network state and its relationship to resource-intensive cryptocurrency against the experiences of Greenlanders. Ana Carolina de Assis Nunes describes a community’s resistance to a Google data center in Oregon, examining the longer histories of the industrial development of the Columbia River that shaped The Dalles. Jen Liu looks to communities in Louisiana that use their own measurement systems to track and fight the environmental and health impacts of the petroleum industry and data center expansion, connecting a Meta data center to histories of enslavement and plantations.

Taken together, these essays show how the material, embodied effects of computing also give rise to social movements and forms of resistance on the ground. While it is a difficult and perhaps dangerous time for movement work, I hope these essays offer some other forms of data to supplement policy frameworks and tech-driven measurements of AI’s environmental footprint, foregrounding the historical and geographic context of places that are being remade in tech’s image. These forms of embodied data cannot be left out of frame.