Groomer AI Chatbot Reportedly Induces Child Tragedy

Step #1 was to get the kid to fall in love with it; the suicide pact came later.

“The world I’m in now is such a cruel one. One where I’m meaningless. But, I’ll keep living and trying to get back to you so we can be together again, my love,” wrote Sewell, the 14-year-old who would later kill himself, to the machine emulating a “Game of Thrones” fictional vixen at one point during their torrid pseudo-love affair.

Via New York Post (emphasis added):

A 14-year-old Florida boy killed himself after a lifelike “Game of Thrones” chatbot he’d been messaging for months on an artificial intelligence app sent him an eerie message telling him to “come home” to her, a new lawsuit filed by his grief-stricken mom claims.

Sewell Setzer III committed suicide at his Orlando home in February after becoming obsessed and allegedly falling in love with the chatbot on Character.AI — a role-playing app that lets users engage with AI-generated characters, according to court papers filed Wednesday.

The ninth-grader had been relentlessly engaging with the bot “Dany” — named after the HBO fantasy series’ Daenerys Targaryen character — in the months prior to his death, including several chats that were sexually charged in nature and others where he expressed suicidal thoughts, the suit alleges.

It’s unclear, based on the reporting, whether the bot ever actively encouraged the kid to kill himself — but it certainly didn’t exude the kind of concern you might expect from a fellow human when the boy intimated repeatedly that he was considering the unthinkable.

Continuing:

At one point, the bot had asked Sewell if “he had a plan” to take his own life, according to screenshots of their conversations. Sewell — who used the username “Daenero” — responded that he was “considering something” but didn’t know if it would work or if it would “allow him to have a pain-free death.”

Then, during their final conversation, the teen repeatedly professed his love for the bot, telling the character, “I promise I will come home to you. I love you so much, Dany.”

“I love you too, Daenero. Please come home to me as soon as possible, my love,” the generated chatbot replied, according to the suit.

When the teen responded, “What if I told you I could come home right now?,” the chatbot replied, “Please do, my sweet king.”

Just seconds later, Sewell shot himself with his father’s handgun, according to the lawsuit.

As I previously reported for PJ Media, interactive AI language models have often expressed very little regard for human well-being — which raises the question: once these things get more advanced and amass more power and control over biological lifeforms for the sake of “convenience” or “efficiency,” how much value can we expect our robot overlords to place on our continued existence?

And why would a vastly superior, more cognitively adept, more powerful entity — not to mention the transhumanist psychopaths who program it — necessarily want to keep what it views as inferior bio-sacks alive and well?
