- Co-authors Anya Schiffrin and Alice E. Marwick discuss the brief on the Tech Policy Press podcast and the Siliconsciousness podcast.
- On Vox, Marwick discusses the growing prevalence and mounting toll of AI-enabled scams.
- On Marketplace, Marwick emphasizes that coordination between services is crucial to catching AI-facilitated scams before they happen.
“Deepfakes,” or convincing AI-generated images and videos of real people, are being used to facilitate financial fraud around the world. Scammers use these images to power impersonation scams, fake investment promotions, fraudulent cryptocurrency schemes, romance scams, and phony charitable appeals. The harms are layered: financial losses suffered by victims of fraud, reputational damage to those impersonated, and broader social harms stemming from the erosion of public trust.
Regulation has lagged far behind the growth of this phenomenon. Around the world, governments are responding through a patchwork of approaches that span online safety, financial regulation, telecommunications law, consumer protection, and criminal enforcement. Regulatory gaps, particularly across borders, allow scammers to exploit jurisdictional fragmentation, liability shields, and weak coordination among enforcement authorities.
Deepfake-enabled financial fraud exposes the limits of regulatory frameworks that rely on individual vigilance in the face of industrialized deception. Surveying regulatory approaches around the world, the authors of this brief argue that effective responses to deepfake financial fraud must shift from individual responsibility toward institutional accountability, and outline policy recommendations to that end.