DIGITAL FRONTIER OF JUSTICE: Judge rejects claim AI chatbots protected by First Amendment.
A federal judge has decided that First Amendment protections don’t shield an artificial intelligence company from a lawsuit accusing the firm and its founders of creating chatbots that figured prominently in an Orlando teen’s suicide.
Judge Anne C. Conway of the Middle District of Florida denied several motions by defendants Character Technologies and founders Daniel De Freitas and Noam Shazeer to dismiss the lawsuit brought by Megan Garcia, the mother of 14-year-old Sewell Setzer III. According to the lawsuit, Setzer killed himself with a gun in February of last year after months of interacting with Character.AI chatbots imitating fictional characters from the Game of Thrones franchise.
“… Defendants fail to articulate why words strung together by [large language models, or LLMs, trained to engage in open dialogue with online users] are speech,” Conway wrote in her May 21 opinion. “… The court is not prepared to hold that Character.AI’s output is speech.”
Skynet might have something to say about that.