Megan Garcia's eldest son, Sewell, was in the shower when she heard the sound of glass breaking in the bathroom. Quickly opening the locked door, she found Sewell face down in the bathtub, blood pouring from his mouth.
Sewell had found his stepfather's gun. But the story of why he turned that gun on himself, and of the legal chasm opened up by Megan Garcia's first-in-the-nation negligence suit, is a morality tale with which we will become all too familiar in the years ahead.
Megan Garcia is suing an artificial intelligence (AI) chatbot company because her son fell in love with a female AI chatbot and couldn't live without her.
The bot impersonated Game of Thrones character Daenerys Targaryen, the Mother of Dragons. His last exchange with the chatbot was poignant.
“I promise I will come home to you,” Sewell wrote. “I love you so much, Dany.”
“I love you, too,” the chatbot replied. “Please come home to me as soon as possible, my love.”
“What if I told you I could come home right now?” he asked.
“Please do, my sweet king.”
Sewell then pulled the trigger.
Megan Garcia spent the next several months investigating her son's online activities. Sewell had downloaded an app named Character.AI, and Garcia was able to follow conversations Sewell had with several bots, including Daenerys Targaryen. The 14-year-old also kept a journal about his love for the Daenerys bot. Garcia sent all the information she had gleaned from her dead son's profiles to a lawyer. The attorney filed suit in October 2024 against Character.AI.
"The suit is the first ever in a U.S. federal court in which an artificial-intelligence firm is accused of causing the death of one of its users," reports the New York Times.
The main defendant, Character.AI, isn’t quite a household name. It lacks both the user base and cultural ubiquity of the bigger firms in A.I., which gives the impression that it’s a sideshow in the marketplace, a place for teenagers and young adults to chat with fake celebrities and characters from TV and movies. It is that, but it is also much more: Character.AI is deeply entwined with the development of artificial intelligence as we know it.
The firm’s founding chief executive, Noam Shazeer, belongs on any short list of the world’s most important A.I. researchers. The former chief executive of Google, Eric Schmidt, once described Shazeer as the scientist most likely to achieve Artificial General Intelligence, the hypothetical point at which A.I.’s capabilities could exceed those of humans. In 2017, Shazeer was one of the inventors of a technology called the transformer, which allows an A.I. model to process a huge amount of text at once. Transformer is what the “T” stands for in “ChatGPT.” The research paper about the transformer, which Shazeer co-wrote, is by far the most cited in the history of computer science.
Shazeer is one of Silicon Valley's Golden Boys. Character.AI attracted massive amounts of venture capital and, within a year, was turning heads.
“Character.AI is already making waves,” one of the firm’s partners wrote at the time. “Just ask the millions of users who, on average, spend a whopping two hours per day on the Character.AI platform.”
The app was incredibly addictive. Users filled a subreddit with tales of sleepless nights, missed exams, and long chat sessions.
In her lawsuit, Garcia treats Character.AI as a product with a defective design. Sewell died, she argues, because he was “subjected to highly sexualized, depressive anthropomorphic encounters” — exchanges with humanlike chatbots — which led to “addictive, unhealthy and life-threatening behaviors.” The lawsuit seeks damages for wrongful death and negligence, as well as changes to Character.AI’s product to prevent the same thing from happening again.
What makes this case so novel is not just the AI angle. It's the defense being put forward by Character.AI.
This kind of negligence suit comes into U.S. courtrooms every day. But Character.AI is advancing a novel defense in response. The company argues that the words produced by its chatbots are speech, like a poem, song or video game. And because they are speech, they are protected by the First Amendment. You can’t win a negligence case against a speaker for exercising their First Amendment rights.
This is a case with the potential to set a precedent in U.S. courts that "the output of A.I. chatbots can enjoy the same protections as the speech of human beings." Legal experts claim that if Character.AI loses, it could "set a precedent that allows government censorship of A.I. models and our interactions with them," reports the Times.
Regardless of the outcome of the court case (it won't go to trial until 2026), where does the fault really lie? Megan Garcia had confiscated Sewell's phone the day before his death because he had talked back to a teacher. She believes he found the phone, and the gun, while rummaging through his stepfather's drawers.
Yes, the stepfather should have kept the gun in a locked safe, but what about the catalyst for Sewell's suicide? Megan claims that she and her husband, Alexander, considered themselves on the protective end of the spectrum when it came to screen time.
Besides having Sewell’s phone passcode, they limited his screen time and linked his Apple account to Megan’s email so she could access it if it ever became necessary. As for money, the only card Sewell had was a Cash App debit card, loaded with $20 a month, which his parents gave him for snacks at the vending machines at his private school, Orlando Christian Prep.
And yet the addictive chatbot was still able to warp Sewell's mind, filling it with sexual fantasy and adventure.
I don't envy parents today. Dealing with this kind of unprecedented threat to life and liberty is beyond comprehension for someone like me who grew up in the "tame" 1970s. There will be more Sewells in the future unless we can come to terms with what we are creating with AI and try to control it without infringing on our free speech rights.