Policy Brief
March 2, 2026

Deepfake Financial Fraud

The Global Regulation of AI-Driven Scams

Anya Schiffrin
Alice E. Marwick
Navya Sinha
Anusha Wangnoo
Kaylee Williams
Elnara Huseynova
Audrey Hatfield

“Deepfakes,” or convincing AI-generated images and videos of real people, are being used to facilitate financial fraud around the world. Scammers use this synthetic media to power impersonation scams, fake investment promotions, fraudulent cryptocurrency initiatives, romance scams, and phony charitable appeals. The harms have many layers, encompassing financial losses suffered by victims of fraud, reputational damage to those impersonated, and broader social harms that stem from the erosion of public trust.

Regulation has lagged far behind the growth of this phenomenon. Around the world, governments are responding through a patchwork of approaches that span online safety, financial regulation, telecommunications law, consumer protection, and criminal enforcement. Regulatory gaps, particularly across borders, allow scammers to exploit jurisdictional fragmentation, liability shields, and weak coordination among enforcement authorities.

Deepfake-enabled financial fraud exposes the limits of regulatory frameworks that rely on individual vigilance in the face of industrialized deception. Surveying regulatory approaches around the world, the authors of this brief argue that effective responses to deepfake financial fraud must shift from individual responsibility toward institutional accountability, and outline policy recommendations to that end.