Why We’re Endorsing the AI Environmental Impacts Act

February 1, 2024

Earlier today, US Senator Ed Markey introduced the AI Environmental Impacts Act of 2024. Here’s why Data & Society decided to endorse the bill.

As an independent research institute, D&S typically does not endorse legislation: we believe our role, first and foremost, is to advance an empirical, sociotechnical research base that moves the field forward and better illuminates issues for policymakers.

We are taking the unusual step of endorsing the AI Environmental Impacts Act because we believe it aligns with that approach.

By calling attention to the necessity of studying the environmental impacts of AI, and putting government resources behind that work, the bill builds on an emerging field of research interrogating the intensive compute and energy consumption that AI products require.

Right now, we have only a limited understanding of the depth and scale of AI’s environmental impacts — but what we do know is alarming. Artificial intelligence systems operate at a significant cost to the planet. Researchers have found that the geographic location of data centers and the time of day when machine learning (ML) models are trained play a significant role in the carbon intensity of cloud computing. Along with their large carbon footprint, large language models also have a massive water footprint, which can be devastating to drought-stricken regions that host data centers. ML models require sustained, intense computational power, energy, and materials throughout the AI lifecycle, from training through deployment. Yet despite these findings, there is no accepted standard for measuring and reporting greenhouse gas emissions and other climate-related metrics, and more research is needed to determine how best to mitigate AI’s various environmental impacts.
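To make concrete why location and timing matter, here is a minimal sketch — using hypothetical numbers, not measurements — of the common back-of-the-envelope estimate behind such findings: a training run’s operational emissions scale with the energy it consumes multiplied by the carbon intensity of the grid powering it, and that intensity varies by region and time of day.

```python
# Illustrative sketch only: all numbers below are hypothetical, not measurements.
# A common back-of-the-envelope estimate of a training run's operational emissions:
#   emissions (kg CO2e) = energy consumed (kWh) x grid carbon intensity (kg CO2e/kWh)
# Grid carbon intensity varies by region and by time of day, which is why where and
# when a model is trained changes its footprint.

def training_emissions_kg(energy_kwh: float, carbon_intensity_kg_per_kwh: float) -> float:
    """Estimate operational CO2e (kg) for a single training run."""
    return energy_kwh * carbon_intensity_kg_per_kwh

# Hypothetical grid carbon intensities (kg CO2e per kWh), for illustration only.
grids = {
    "low-carbon grid (e.g., mostly hydro)": 0.02,
    "average grid": 0.40,
    "coal-heavy grid at peak hours": 0.80,
}

energy_kwh = 100_000  # hypothetical energy use for one training run

for grid_name, intensity in grids.items():
    print(f"{grid_name}: {training_emissions_kg(energy_kwh, intensity):,.0f} kg CO2e")
```

Under these assumed figures, the same run emits many times more carbon on one grid than another — which is exactly the sensitivity to place and time that researchers have documented, and that standardized measurement would make visible.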

This legislation would help to advance a rigorous, empirical understanding of AI’s impacts on the environment and create standards for measurement and reporting. It directs the Environmental Protection Agency to study and publicly report on the energy and pollution impacts of AI models, their physical hardware, and data centers, as well as the disparate distribution of AI’s negative environmental consequences. It also requires the National Institute of Standards and Technology to convene a consortium to identify the methodologies and standards needed to “measure and report the full range of environmental impacts of artificial intelligence.”

Much policy attention has rightly focused on issues like algorithmic discrimination, threats of mis/disinformation, and labor disruptions. Essentially, this focus is on the outputs of an algorithmic system: what does the AI produce? Is the output biased? Does it generate discriminatory decisions that foreclose opportunities for people? Can bad actors exploit the system to generate their own malicious outputs? How will AI’s content production abilities impact workers and the quality of work?

As important as those issues are, attention to them has also had the effect of crowding out others, such as scrutiny of AI’s supply chain — the infrastructure that goes into physically building, developing, training, and deploying an AI system on the front end. Components like transistors, lithium batteries, and GPUs make up the physical AI infrastructure, and they all come with substantial environmental costs, including the mining of rare earth minerals. That supply chain also encompasses the enormous energy externalities of training LLMs and the various forms of exploited labor along it, from mining to e-waste economies. By foregrounding climate effects, the AI Environmental Impacts Act signals the necessity for all stakeholders and policymakers to assess the environmental externalities associated with AI.

Much has been made of AI’s purported existential risks — claims often propelled by an “AI safety” cadre of longtermists more interested in speculative futures than present-day concerns. But the more obvious existential risks are already here. Global warming and climate disaster are affecting people now. Given AI’s potential to dramatically ramp up the consumption of ever-more-precious physical resources and wreak downstream impacts on communities and habitats, it is critical to have a grounded understanding of the technology’s impact on the natural environment. The AI Environmental Impacts Act would facilitate a much-needed reckoning with the ways AI stands to reshape the world as we know it, and enable an appropriate response.