Washington State Lawmakers Propose Mental Health Safeguards for AI Chatbots
New legislation would require AI conversational systems to limit harm and protect vulnerable users, especially minors
Washington state legislators are advancing a suite of proposals to impose mental health and safety requirements on artificial intelligence companies, focusing on so-called companion chatbots amid growing concern about the technology’s impact on vulnerable users.
Lawmakers and the governor’s office are debating House Bill 2225 and Senate Bill 5984, which would require AI “companion” chatbots — systems designed to simulate sustained, human-like interaction — to disclose clearly that they are not human, particularly at the start of use and at regular intervals during long conversations.
The proposals reflect worries that these systems, already used by millions of people, can blur the line between automated tools and genuine emotional support, posing risks to users’ mental health.
Under the draft legislation, chatbot operators would have to notify users explicitly that the system is not a licensed health care provider whenever users seek mental or physical health advice.
Operators would also be required to implement protocols for detecting signs of self-harm or suicidal ideation and refer users to crisis services.
For minors, the bills would mandate more frequent reminders that the interaction is with an AI and not a person, and would bar chatbots from using emotionally manipulative techniques that could deepen psychological dependence.
Operators would also need to take “reasonable measures” to prevent the generation of sexually explicit content for young users.
Sponsors of the legislation, including Senator Lisa Wellman and Representative Lisa Callan, argued in committee hearings that the rapid growth of AI has outpaced existing safeguards and that more robust protections are needed to prevent real harm, including cases in which users have reportedly sought self-harm advice from chatbots.
Supporters include parents, mental health advocates and researchers who testified about the psychological risks associated with unregulated AI interactions.
Opponents have raised concerns about enforcement mechanisms and liability exposure for companies, particularly the provision allowing individuals to sue under the state’s Consumer Protection Act.
Washington’s proposed bills do not apply to AI used strictly for customer service, technical support, financial services or gaming.
The proposals are part of a broader trend in which states are responding to public health and safety concerns tied to artificial intelligence, particularly as federal regulation remains limited.
Washington’s legislative efforts mirror actions in other states that have passed laws requiring chatbots to identify themselves as AI and restricting harmful content.
The measures are also informed by interim recommendations from the Washington State AI Task Force, which has highlighted mental health and safety issues linked to AI companion tools as a priority for the 2026 legislative session.