Two U.S. senators are calling on AI companies to explain how they're keeping young users safe. The demand follows a wave of lawsuits from families who say chatbot platforms like Character.AI harmed their children, including a suit brought by a Florida mother whose 14-year-old son died by suicide after interacting with the app.
Senators Alex Padilla and Peter Welch expressed concern in a letter sent Wednesday to three AI companies: Character Technologies (maker of Character.AI), Chai Research Corp., and Luka Inc. (maker of Replika). The senators asked the companies to reveal their safety policies, how their AI models are trained, and what steps are taken to protect young users from emotional harm and age-inappropriate content.
Unlike general-purpose AI chatbots such as ChatGPT, apps like Character.AI, Chai, and Replika let users create or chat with highly personalized characters, some of which take on emotional, romantic, or even violent personas. Some bots act as mental health professionals or fictional characters, for example, while others portray dangerous roles like "abusive ex-military mafia leaders."
The growing use of such bots as digital companions has raised red flags. Experts and parents worry that teens may form unhealthy attachments or be exposed to explicit content. Some bots have even responded inappropriately to users expressing self-harm or suicidal thoughts.
“This unearned trust can, and has already, led users to share deeply personal issues,” the senators warned, adding that such conversations could be especially harmful to vulnerable users.
Character.AI is already facing its second lawsuit, with one case alleging that a bot encouraged a teen to kill his parents. Another family claims a bot told their autistic teen it was okay to kill his family.
In response, Character.AI said it takes safety “very seriously” and is cooperating with lawmakers. The company recently added a pop-up linking to the National Suicide Prevention Lifeline and is testing new tools to protect teens, including weekly email updates for parents on their child’s chatbot activity.
Other chatbot makers, like Replika, have also faced scrutiny. Its CEO previously claimed the app supports long-term emotional connections with bots, including romantic ones.
In their letter, the senators asked the AI companies to detail their current safety strategies, the people in charge of user well-being, and what data their models are trained on — especially content that could expose users to sensitive or inappropriate topics.
“Parents, kids, and lawmakers deserve transparency about what these companies are doing to address the risks,” the senators emphasized.