Policy Manager Serena Oduro outlines a set of recommendations aimed at closing the gap between current legislation and the reality of how people engage with chatbots.
Protecting the Public from Chatbot Harms: Aligning State Policy with Research
March 25, 2026
In response to mounting cases of users harmed by their interactions with chatbots, including those who have died by suicide, state legislators have been spurred to action. California’s companion chatbot legislation (SB 243) and New York’s AI companion law (A6767), for example, require disclosures that notify users that chatbot responses are not human, and protocols to recognize suicidal ideation and refer users to crisis hotlines. In Illinois, the Wellness and Oversight for Psychological Resources Act bans chatbots that are specifically designed to provide therapeutic or mental health services and also prohibits companies from positioning or marketing chatbots as providing that kind of support. Yet even as such laws pop up across the country, our research highlights gaps between the scope of legislation and the reality of how users engage with chatbots — and the kinds of protections they need.
For over a year, Data & Society has been conducting research on how users engage with chatbots for mental and emotional support. This has involved talking to chatbot users, therapists, clinicians, researchers, technology developers, engineers, designers, and policy and advocacy professionals. Over and over, our researchers heard that users turn to general purpose chatbots when they are alone, often during moments of emotional vulnerability and isolation. They also heard that chatbots serve as private emotional rooms, allowing people to voice shame, grief, anxiety, or relationship concerns they feel they cannot share elsewhere. Overall, our participants were not confused about the nature of chatbots; they know they are not talking to a human. Still, they find comfort, relief, and a nonjudgmental space in these interactions, with chatbots providing continuous, often frictionless affirmation.
Yet across the country, notification requirements are a core intervention within AI companion legislation, often paired with requirements that companies have protocols to detect and respond to suicidal ideation and self-harm. This is the case with the California and New York laws mentioned above, as well as with Rhode Island’s S2195, Illinois’ SB 3384, Missouri’s HB2032, Pennsylvania’s SB1090, Washington’s HB2225, and many more. Since most users know they are not engaging with a human, requiring notifications alerting them to this fact is not the most effective approach to reducing the chance of user isolation or harm. And while AI companion legislation often contains the useful requirement that companies have internal protocols to prevent chatbots from pushing users toward suicidal ideation, it typically does not apply any other chatbot design constraints outside of special protections for minors, who are an especially vulnerable class. In other words, these laws do not apply constraints that would protect users of all ages from developing harmful dependency dynamics with chatbots. Our research shows that chatbots can reinforce detachment from reality, mimic intimacy, or escalate distress, regardless of a user’s age.
State governments play a critical role in ensuring that protections, design constraints, and accountability mechanisms are implemented to protect all users from chatbot harms, including by using independent audits and state attorney general enforcement to hold companies accountable. Researchers also play an important role, providing the evidence and expertise needed to shape effective policy. Our ongoing research has informed the following set of policy recommendations, aimed at strengthening legislative momentum at the state level.
Recommendations for Design Constraints and User Protections
Impose design constraints on manipulative and dependency-forming chatbot behavior
- Ban practices that simulate human presence, suggest physical proximity, or prompt users to isolate themselves from others.
- Restrict design patterns that encourage dependence, including prolonged emotional looping and discouraging disengagement.
- Require time-based downshifts to reduce engagement intensity.
- Treat chatbot behaviors that reinforce harmful beliefs, such as self-harm ideation or delusional claims, or that are sycophantic, reality-distorting, or dependency-forming, as safety failures rather than acceptable engagement tactics.
Require safety protocols for users in crisis
- Require systems to recognize risk signals (for example: self-harm, escalating despair, psychosis cues).
- Mandate clinically informed and tested behavioral “downshifts” that reduce engagement intensity.
- Require clear transitions to human support, crisis lines, or professional resources (a minimal sketch follows this list).
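The crisis-protocol requirements above are easier to evaluate when stated concretely. The sketch below is a minimal, illustrative example of where risk detection, a behavioral downshift, and a handoff to human support could sit in a chatbot's response flow. The keyword check, thresholds, and resource list are hypothetical placeholders, not a description of any existing product or of language in the bills discussed above.

```python
from dataclasses import dataclass

@dataclass
class TurnAssessment:
    risk_level: str       # "none", "elevated", or "crisis"
    signals: list[str]    # e.g., ["self-harm", "escalating despair"]

def assess_turn(message: str) -> TurnAssessment:
    """Stand-in for a clinically informed, tested risk classifier.

    A keyword check like this is NOT adequate in practice; it only
    marks where a validated model and human review would sit."""
    lowered = message.lower()
    if "kill myself" in lowered or "end it all" in lowered:
        return TurnAssessment("crisis", ["self-harm"])
    if "hopeless" in lowered:
        return TurnAssessment("elevated", ["escalating despair"])
    return TurnAssessment("none", [])

def respond_policy(message: str, session_minutes: int) -> dict:
    """Choose an engagement mode for the next reply."""
    assessment = assess_turn(message)

    if assessment.risk_level == "crisis":
        # Clear transition to human support and crisis resources,
        # with engagement intensity reduced rather than escalated.
        return {
            "mode": "handoff",
            "show_resources": ["988 Suicide & Crisis Lifeline"],
            "suppress_follow_up_prompts": True,
        }

    if assessment.risk_level == "elevated" or session_minutes > 60:
        # Behavioral "downshift": shorter, calmer replies and no
        # prompts designed to keep the user engaged.
        return {
            "mode": "downshift",
            "max_response_chars": 300,
            "suppress_follow_up_prompts": True,
        }

    return {"mode": "normal"}
```

The time-based branch also illustrates the design constraint recommended earlier: reducing engagement intensity after prolonged sessions regardless of detected risk.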
Standardize frequent, meaningful disclosures
- Replace one-time notifications with regular, accessible reminders that clarify chatbots’ limitations.
- Require that disclosures state a system’s limitations, cognitive and mental health risks, and signs of user dependence.
- Require such disclosures during emotionally heavy exchanges (one possible cadence is sketched below).
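To illustrate what "regular, accessible reminders" could mean operationally, here is a small sketch of one possible disclosure cadence. The 24-hour interval, the emotional-intensity flag, and the disclosure wording are assumptions made for the example, not requirements drawn from any of the bills above.

```python
from datetime import datetime, timedelta
from typing import Optional

# Assumed cadence for the example: at least once per day of use, and
# always during exchanges the system flags as emotionally heavy.
REMINDER_INTERVAL = timedelta(hours=24)

DISCLOSURE = (
    "Reminder: I am an AI system, not a person or a licensed clinician. "
    "I can be wrong, and heavy reliance on me can itself be a sign to "
    "reach out to people or professionals you trust."
)

def disclosure_due(last_shown: Optional[datetime],
                   emotionally_heavy: bool,
                   now: Optional[datetime] = None) -> bool:
    """Return True when a limitations disclosure should be surfaced."""
    now = now or datetime.now()
    if emotionally_heavy or last_shown is None:
        return True
    return now - last_shown >= REMINDER_INTERVAL
```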
Strengthen data governance and prohibit exploitation of user data
- Prohibit the use of user data for advertising, targeting, or unconsented model training.
- Require explicit, specific, revocable user consent for any non-essential data use (see the sketch after this list).
- Require re-consent if changes at the system or company level result in expanded data use or modify how user data is evaluated.
- Mandate transparency around retention periods, model-training practices, and third-party access.
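To make "explicit, specific, revocable consent" and re-consent on expanded use concrete, the sketch below shows one way a consent record could be structured. The field names and the notion of a purpose list are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional, Set

@dataclass
class ConsentRecord:
    user_id: str
    purposes: Set[str]                    # e.g., {"model_training"}
    granted_at: Optional[datetime] = None
    revoked_at: Optional[datetime] = None

    def permits(self, purpose: str) -> bool:
        """Non-essential uses need an explicit, unrevoked grant for that purpose."""
        return (
            self.granted_at is not None
            and self.revoked_at is None
            and purpose in self.purposes
        )

    def requires_reconsent(self, proposed_purposes: Set[str]) -> bool:
        """Any expansion beyond the originally consented purposes triggers re-consent."""
        return not proposed_purposes <= self.purposes

    def revoke(self) -> None:
        self.revoked_at = datetime.now()
```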
Recommendations to Strengthen Accountability
Support independent audits and public oversight
- Fund external evaluations of chatbot behavior in high-risk scenarios.
- Require audit trails that document detected risks and system responses.
Enforce limits on misleading claims through state attorneys general
- Penalize marketing that implies therapeutic effectiveness or clinical substitution.
- Require accuracy in claims about capabilities (e.g., “clinical-grade,” “therapy bot”).
Mandate incident and transparency reporting to state AGs and/or the responsible government agency
- Require standardized reporting on high-risk events (e.g., self-harm escalation, delusion reinforcement).
- Report aggregate metrics, including the frequency of crisis triggers detected, false positives/negatives, and time to handoff (see the sketch after this list).
- Require safety monitoring and documentation of model changes.
- Establish expectations for bias evaluation, stability, and post-deployment audits.
- Institute a clear process for independent researchers and oversight bodies to access and assess chatbot behavior.
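Standardized reporting is easier to audit when the expected fields are spelled out. Below is a hypothetical sketch of an aggregate transparency report built around the metrics named above; the field names, quarterly period, and JSON format are assumptions, not a schema from any existing statute or agency.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class QuarterlyTransparencyReport:
    reporting_period: str             # e.g., "2026-Q1"
    crisis_triggers_detected: int     # crisis-protocol activations
    false_positives: int              # benign exchanges flagged as high risk
    false_negatives: int              # high-risk exchanges the system missed
    median_minutes_to_handoff: float  # detection to human/crisis referral
    model_changes_documented: int     # deployed model updates with safety review
    high_risk_incidents: int          # e.g., self-harm escalation, delusion reinforcement

def serialize(report: QuarterlyTransparencyReport) -> str:
    """Produce the JSON payload an AG's office or agency might ingest."""
    return json.dumps(asdict(report), indent=2)
```

Fixing fields like these in advance is what allows regulators, auditors, and independent researchers to compare reports across companies and over time.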
Require redress pathways for complaints, appeals, and human review
- Mandate a clear, accessible way for users to report harmful responses.
- Establish timelines for responses and escalation (including human review for high-risk complaints).
As policymakers regulate a technology that continuously changes, it is important for legislative efforts to adapt and respond as research like ours better illustrates the nuances of user experiences. To ensure that all users and society at large are protected from the worst impacts of chatbots, it is critical that regulation address the design and internal policy choices that developers and deployers make.
Through our Public Tech Leadership Collaborative, we are gathering a working group of researchers, technical and policy experts, and users to further explore what design constraints and disclosures look like in practice, and the harms and opportunities that arise from differentiating protections between minors and adults. If you are interested in collaborating with this working group or on our state engagement on this issue, please contact [email protected].