Announcement

Our Statement on the White House’s New Approach to AI Oversight

The Trump administration, which rolled back Biden-era AI oversight rules in January 2025, has announced its intent to test frontier AI models before they are released to the public. Given the documented harms caused by AI systems, government oversight of AI models and the AI industry is long overdue. But any effective approach to securing Americans’ rights and safety will require an ecosystem of approaches along the entire chain of AI development and deployment, not only a single point of intervention, in a lab environment, focused on specific models designated as “frontier.”

Indeed, the Center for AI Standards and Innovation (CAISI), which will be tasked with pre-deployment testing of these models, argued in its recent report Challenges to the Monitoring of Deployed AI Systems that “post-deployment monitoring — from incident monitoring to field studies — is a crucial practice for confident, wide-spread AI adoption.” Experts have long called for the auditing of AI systems, impact assessments that evaluate the broader effects of deployed systems, and fundamental data and privacy protections. Early reports of the White House’s plans make no mention of these measures. A proposed working group to consider oversight approaches is also reported to include only government and industry representatives, which raises further questions about the effort’s effectiveness: it risks formalizing collaboration with the very companies being evaluated while excluding independent researchers, civil society organizations, labor representatives, consumer advocates, state regulators, and communities directly affected by AI systems.

The administration’s new interest in AI oversight comes amid mounting public concern about the technology’s impact on jobs, education, the environment, and more. We urge the administration to listen to the growing number of Americans across the country voicing these concerns, and to prioritize oversight regimes that meaningfully address them.