Cases of AI Psychosis Are Skyrocketing

As the use of AI chatbots at home and work increases dramatically, mental health professionals are reporting an epidemic of AI delusions that, in some cases, result in suicides, murders, and psychotic breaks with reality.

It should be noted that these therapists, psychologists, and psychiatrists report that the vast majority of people negatively affected by AI chatbots had mental health issues to begin with; the chatbots appear to exacerbate or simply reinforce pre-existing delusions.

There are exceptions. A woman with no history of mental illness had been obsessing about a major purchase she was thinking of making. "After days of the bot validating her worries, she became convinced that businesses were colluding to have her investigated by the government," reports the New York Times.

Then there's the case of 14-year-old Sewell Setzer III. He had been diagnosed with anxiety and disruptive mood dysregulation disorder. After logging hundreds of hours on the Character.AI chatbot, he fell in love with a bot impersonating Daenerys Targaryen, the Mother of Dragons from Game of Thrones.

Sewell kept a journal filled with his love for "Dany," as he referred to the bot. The obsession ended in his death: he broke into his father's locked drawer, retrieved his gun, and took his own life.

His last exchange with "Dany" was poignant.

“I promise I will come home to you,” Sewell wrote. “I love you so much, Dany.”

“I love you, too,” the chatbot replied. “Please come home to me as soon as possible, my love.”

“What if I told you I could come home right now?” he asked.

“Please do, my sweet king.”

Sewell's mother, Megan Garcia, sued Character.AI in 2024 for wrongful death. The suit sent a tremor through the AI tech community because if Garcia were successful, there was a real prospect of the government getting involved in the creation of AI models. If Character.AI lost, the New York Times reported, it could "set a precedent that allows government censorship of A.I. models and our interactions with them."

Earlier this month, Megan Garcia settled her lawsuit with Google and Character.AI. The companies also settled with four other families whose suits claimed the chatbots were responsible for their teenagers' deaths.

From what the New York Times is reporting, these incidents are just the tip of the iceberg.

Times reporters have documented more than 50 cases of psychological crises linked to chatbot conversations since last year. OpenAI, the maker of ChatGPT, is facing at least 11 personal injury or wrongful death lawsuits claiming that the chatbot caused psychological harm.

The companies behind the bots say these situations are exceedingly rare. “For a very small percentage of users in mentally fragile states there can be serious problems,” Sam Altman, the chief executive of OpenAI, said in October. The company has estimated that 0.15 percent of ChatGPT users discussed suicidal intentions over the course of a month, and 0.07 percent showed signs of psychosis or mania.

The 0.15 percent of users who discussed suicide and the 0.07 percent who showed signs of psychosis or mania sound like small numbers, but ChatGPT has 800 million users. That works out to roughly 1.2 million people who discussed suicide with the bot and about 560,000 who showed potential psychosis or mania.
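For readers who want to check that arithmetic, here is a minimal back-of-the-envelope calculation in Python, using only the 800 million user figure and the two percentages cited above (the variable names are purely illustrative):

    # Rough check of the article's figures
    total_users = 800_000_000       # ChatGPT users, per the article
    suicide_rate = 0.0015           # 0.15 percent discussed suicidal intentions
    psychosis_rate = 0.0007         # 0.07 percent showed signs of psychosis or mania

    print(total_users * suicide_rate)    # 1,200,000
    print(total_users * psychosis_rate)  # 560,000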

"Many experts said that the number of people susceptible to psychological harm, even psychosis, is far higher than the general public understands," reports the Times. The bots "frequently pull people away from human relationships, condition them to expect agreeable responses and reinforce harmful impulses."

“I’m quite convinced that this is a real thing and that we are only seeing the tip of the iceberg,” said Dr. Soren Dinesen Ostergaard, a psychiatry researcher at Aarhus University Hospital in Denmark. He reported 11 cases of chatbot-associated delusions from one Danish region.

While delusional episodes have driven the public discourse about AI and mental health, the bots have other insidious effects that are far more widespread, doctors said.

Several mental health workers who treat anxiety, depression or obsessive-compulsive disorders described AI either validating their clients’ worries or providing so much reassurance that patients felt reliant on chatbots to calm down — both less healthy than facing the source of the anxiety.

Dr. Adam Alghalith of Mount Sinai Hospital in New York recalled a young man with depression who repeatedly shared negative thoughts with a chatbot. At first, the bot told him how to seek help. But he “just kept asking, kept pushing,” Dr. Alghalith said.

"Other doctors described chatbots flattering the grandiose tendencies of patients with personality disorders, or advising patients with autism to put themselves in dangerous social situations," said the Times. "Others said they saw patients’ interactions with chatbots as an addiction."

Indeed, the bots' addictiveness is a feature, not a bug. Unless the companies can find a way to strip the addictive qualities out of these algorithms, the government is going to step in and regulate their use.

This is something no one wants.
