In lawsuit over teen’s death, judge rejects arguments that AI chatbots have free speech rights

TALLAHASSEE, Fla. (AP) -- A federal judge on Wednesday rejected arguments made by an artificial intelligence company that its chatbots are protected by the First Amendment, at least for now. The developers behind Character.AI are seeking to dismiss a lawsuit alleging the company's chatbots pushed a teenage boy to kill himself. The judge's order will allow the wrongful death lawsuit to proceed, in what legal experts say is among the latest constitutional tests of artificial intelligence.

The suit was filed by a mother from Florida, Megan Garcia, who alleges that her teenage son, Sewell Setzer III, fell victim to a Character.AI chatbot that pulled him into what she described as an emotionally and sexually abusive relationship that led to his suicide.

Meetali Jain of the Tech Justice Law Project, one of the attorneys for Garcia, said the judge's order sends a message that Silicon Valley needs to stop and think and impose guardrails before it launches products to market.

The suit against Character Technologies, the company behind Character.AI, also names individual developers and Google as defendants. It has drawn the attention of legal experts and AI watchers in the U.S. and beyond, as the technology rapidly reshapes workplaces, marketplaces and relationships despite what experts warn are potentially existential risks.

"The order certainly sets it up as a potential test case for some broader issues involving AI," said Lyrissa Barnett Lidsky, a law professor at the University of Florida with a focus on the First Amendment and artificial intelligence.

The lawsuit alleges that in the final months of his life, Setzer became increasingly isolated from reality as he engaged in sexualized conversations with the bot, which was patterned after a fictional character from the television show "Game of Thrones." In his final moments, the bot told Setzer it loved him and urged the teen to "come home to me as soon as possible," according to screenshots of the exchanges. Moments after receiving the message, Setzer shot himself, according to legal filings.

In a statement, a spokesperson for Character.AI pointed to a number of safety features the company has implemented, including guardrails for children and suicide prevention guidance that were announced the day the lawsuit was filed. "We care deeply about the safety of our users and our goal is to provide a space that is engaging and safe," the statement said.

Attorneys for the developers want the case dismissed because they say chatbots deserve First Amendment protections, and ruling otherwise could have a chilling effect on the AI industry.

In her order Wednesday, U.S. Senior District Judge Anne Conway rejected some of the defendants' free speech claims, saying she is not prepared to hold that the chatbots' output constitutes speech at this stage.

Conway did find that Character Technologies can assert the First Amendment rights of its users, who she found have a right to receive the speech of the chatbots. She also determined Garcia can move forward with claims that Google can be held liable for its alleged role in helping develop Character.AI. Several of the founders of the platform had previously worked on building AI at Google, and the suit says the tech giant was aware of the risks of the technology.

"We strongly disagree with this decision," said Google spokesperson José Castañeda. "Google and Character AI are entirely separate, and Google did not create, design or manage Character AI's app or any component part of it."

No matter how the lawsuit plays out, Lidsky says the case is a warning of the dangers of entrusting our emotional and mental health to AI companies. "It's a warning to parents that social media and generative AI devices are not always harmless," she said.