December 8, 2025 — In our comment to the Food and Drug Administration (FDA), we draw on ongoing Data & Society research to focus on what people’s actual, everyday use of chatbots for mental and emotional support means for the FDA’s approach to generative AI-enabled digital mental health medical devices. Specifically, we show how chatbot use complicates traditional notions of “intended use,” “benefit-risk,” and “postmarket performance,” and we offer recommendations for how the FDA might adapt its frameworks for devices that act through open-ended, relational conversation.
A central finding of our research on AI chatbots and mental health is that large language models (LLMs) regularly function as mental health tools, regardless of how developers label them. Users form patterns of dependence on these tools that look very different from those associated with traditional digital health devices. And while many chatbot developers deliberately frame their products as wellness, lifestyle, or “companionship” tools to avoid classification as a medical device (and thus the accompanying regulations), if a chatbot looks like therapy and acts like therapy — simulating clinician-patient interaction or delivering structured psychotherapeutic content — it sits in a gray zone where wellness framing no longer aligns with real-world risk. Against this backdrop, our research highlights a series of regulatory challenges that are not well captured by existing wellness/device distinctions.