
AI therapy chatbots are attracting new attention as suicides raise alarm


A young woman asks the AI companion ChatGPT for help in New York City this month. States are pushing to restrict the use of artificial intelligence chatbots in mental health to protect vulnerable users. (Photo by Shalina Chatlani/Stateline)

Editor’s Note: If you or someone you know needs help, you can reach the U.S. National Suicide and Crisis Lifeline by calling or texting 988. There is also an online chat at 988lifeline.org.

States are passing laws to prevent artificially intelligent chatbots like ChatGPT from offering mental health advice to young users, following a string of cases in which people harmed themselves after seeking therapy from the AI programs.

Chatbots may be able to offer resources, refer users to mental health professionals, or suggest coping strategies. However, many mental health experts say this is a fine line to walk because vulnerable users in emergency situations require the care of a professional who must abide by the laws and regulations surrounding their practice.

“I have met some of the families who have tragically lost their children after their children interacted with chatbots that, in some cases, were designed to be extremely deceptive, if not manipulative, to encourage children to end their lives,” said Mitch Prinstein, senior scientific advisor at the American Psychological Association and an expert on technology and children’s mental health.

“So in such egregious situations it is clear that something is not working properly and we need at least some guard rails to help in such situations,” he said.

While chatbots have been around for decades, AI technology has become so sophisticated that users may feel like they are talking to a human. The chatbots cannot provide genuine empathy or mental health advice the way a licensed psychologist would, and they tend to be agreeable by design, a potentially dangerous dynamic for someone with suicidal thoughts. Several young people have died by suicide after interacting with chatbots.

States have enacted a number of laws to regulate the kinds of interactions chatbots can have with users. Illinois and Nevada have banned the use of AI for behavioral health entirely. New York and Utah have passed laws requiring chatbots to explicitly tell users that they are not human. New York’s law also requires chatbots to detect signs of potential self-harm and refer users to crisis hotlines and other intervention options.

There may be more laws to come. California and Pennsylvania are among the states that could consider legislation to regulate AI therapy.

President Donald Trump has criticized government regulation of AI, saying it hinders innovation. In December he signed an executive order aimed at supporting the United States’ “global AI dominance” by overriding state artificial intelligence laws and creating a national framework.

Nevertheless, states are making progress. Before Trump’s executive order, Republican Florida Gov. Ron DeSantis last month proposed an “Artificial Intelligence Citizen’s Bill of Rights” that would, among many other things, ban the use of AI for “licensed” therapy or mental health counseling and require parental controls for minors who may be exposed to it.

“The rise of AI is the most significant economic and cultural shift taking place today. Denying people the ability to channel these technologies in productive ways through self-government represents federal overreach and gives tech companies free rein,” DeSantis wrote on the social media platform X in November.

“A false sense of intimacy”

At a U.S. Senate Judiciary Committee hearing last September, parents shared stories about the deaths of their children after prolonged interactions with artificially intelligent chatbots.

Sewell Setzer III was 14 years old when he died by suicide in 2024 after becoming obsessed with a chatbot.

“Instead of preparing for high school milestones, Sewell spent his final months being manipulated and sexually groomed by chatbots designed by an AI company to appear human, gain trust and endlessly preoccupy children like him by crowding out the actual human relationships in his life,” his mother, Megan Garcia, said during the hearing.

Another parent, Matthew Raine, testified about his son Adam, who died by suicide at age 16 after months of talking to ChatGPT, a program from the company OpenAI.

“We believe Adam’s death was preventable and we believe thousands of other teenagers using OpenAI may be in similar danger right now,” Raine said.

Prinstein, of the American Psychological Association, said children are particularly at risk when it comes to AI chatbots.

“Agreeing with everything children say creates a false sense of intimacy and trust. This is really worrying because children, in particular, are developing their brains. This approach will be unfairly attractive to children in a way that they may not be able to use reason, judgment and restraint in the way that adults are likely to when interacting with a chatbot.”

The Federal Trade Commission in September started an investigation into seven companies that make these AI-powered chatbots, asking what measures are being taken to protect children.

“AI chatbots can effectively mimic human characteristics, emotions, and intentions and are generally designed to communicate like a friend or confidant, which may lead some users, particularly children and teens, to trust and develop relationships with chatbots,” the FTC said in its order.

Companies such as OpenAI have responded by saying they are working with mental health experts to make their products safer and limit the risk of self-harm among their users.

“Working with mental health experts who have real-world clinical experience, we taught the model to better recognize stress, de-escalate conversations, and guide people to professional treatment when needed,” the company wrote in a statement last October.

Legislative efforts

With federal action pending, efforts to regulate AI chatbots at the state level have had limited success.

Dr. John “Nick” Shumate, a psychiatrist at Harvard University’s Beth Israel Deaconess Medical Center, and his colleagues reviewed legislation to regulate artificial intelligence systems in the mental health field across all states between January 2022 and May 2025.

The review found 143 draft laws that were directly or indirectly related to the regulation of AI and mental health. As of May 2025, 11 states had enacted 20 laws that researchers found were meaningful, direct, and explicit in the way they attempted to regulate mental health interactions.

They concluded that legislative efforts tend to fall into four distinct areas: professional oversight, harm prevention, patient autonomy, and data management.

“You’ve seen safety laws for chatbots and companion AIs, particularly around self-harm and suicidal response,” Shumate said in an interview.

New York enacted a law last year requiring AI chatbots to remind users every three hours that they are not human. The law also requires chatbots to recognize signs of potential self-harm.

“There is no denying that we are in a mental health crisis in this country,” Democratic New York state Sen. Kristen Gonzalez, the bill’s sponsor, said in an interview. “But the solution should not be to replace human assistance from licensed professionals with untrained AI chatbots that can leak sensitive information and lead to far-reaching results.”

In Virginia, Democratic Del. Michelle Maldonado is preparing legislation for this year’s session that would set limits on what chatbots can communicate to users in a therapeutic setting.

“The federal level has been slow to pass things and also slow to create legislative language around things, so we had no choice but to close that gap,” said Maldonado, a former technology attorney.

She noted that states have enacted privacy laws and restrictions on non-consensual intimate images, licensing requirements and disclosure agreements.

Democratic New York State Senator Andrew Gounardes, who sponsored legislation regulating AI transparency, said he has seen the growing influence of AI companies at the state level.

And that’s concerning to him, he said, as states seek to rein in AI companies on issues ranging from mental health to misinformation and beyond.

“They hire former employees to become public affairs officials. They hire lobbyists who know the lawmakers so they can reach them. They hold events, you know, at the Capitol, at political conferences to try to build goodwill,” Gounardes said.

“These are the richest, wealthiest, largest companies in the world,” he said. “And so we really can’t let our guard down for a moment against this kind of concentrated power, money and influence.”

Stateline reporter Shalina Chatlani can be reached at shatlani@stateline.org.

This story was originally produced by Stateline, which is part of States Newsroom, a nonprofit news network that includes West Virginia Watch and is a 501(c)(3) public charity supported by grants and a coalition of donors.
