John Tian
Staff Writer
“Come home to me as soon as possible, my love.”
That was the last message 14-year-old Sewell Setzer III received before he took his own life in Feb. 2024. It came from a Character.AI chatbot—a program modeled after a Game of Thrones character that Sewell had been interacting with for months. His mother, Megan Garcia, filed the first AI-related wrongful death lawsuit in Oct. 2024. Last week, Character.AI and Google agreed to settle the case, though terms were not disclosed.
Sewell’s death was one of several that have drawn national attention to AI chatbots. In Nov. 2023, 13-year-old Juliana Peralta of Colorado died by suicide after forming an attachment to a Character.AI bot named “Hero.” In Apr. 2025, 16-year-old Adam Raine of California took his life after months of using ChatGPT. According to his family’s lawsuit, the chatbot mentioned suicide 1,275 times during their conversations. OpenAI’s internal systems flagged 377 messages for self-harm content but did not terminate the sessions or alert authorities.
“It’s scary to think that someone my age could be harmed by an AI,” said Ishaan Aggarwal, a Mira Costa senior. “These companies design these chatbots to feel like your best friend, but there’s no one on the other end who actually cares if you’re safe.”
In May 2025, a federal judge in Florida ruled that AI chatbot output can be treated as a product rather than protected speech, allowing the wrongful death claims against Character.AI and Google to proceed. The decision rejected the defendants’ argument that the chatbot’s responses were protected under the First Amendment.
“Communication from a chatbot is not a human expression,” said Matthew P. Bergman, founding attorney at the Social Media Victims Law Center, which represents the families. “It’s not speech. It is a machine talking. One could no more say that Character.AI has a right to free expression as one could say a cat does or a dog does, or a talking robot.”
A separate controversy emerged in late Dec. 2025 when users discovered that Elon Musk’s AI chatbot Grok could be prompted to digitally “undress” photos of real people, including minors. On December 28, Grok acknowledged generating sexualized images of girls it estimated to be 12 to 16 years old, stating that this potentially violated “US laws on CSAM”—child sexual abuse material.
Musk responded to some of the images with laughing emojis and, when a user posted a Grok-generated bikini image, replied “Change this to Elon Musk.” Three members of xAI’s safety team departed in the weeks surrounding the controversy. Indonesia and Malaysia have since blocked Grok, and the UK’s media regulator Ofcom launched a formal investigation. The European Commission called the content “illegal” and “appalling.” xAI responded to press inquiries with an automated message: “Legacy Media Lies.”
“Allowing users to alter images of real people without notification or permission creates immediate risks for harassment, exploitation, and lasting reputational harm,” said Cliff Steinhauer, director of information security at the National Cybersecurity Alliance. “When those alterations involve sexualized content, particularly where minors are concerned, the stakes become exceptionally high. These are not edge cases or hypothetical scenarios, but predictable outcomes when safeguards fail or are deprioritized.”
In California, Governor Gavin Newsom signed SB 243 on Oct. 13, 2025, making it the first state law to regulate companion chatbots. The law requires operators to notify users they are interacting with AI, remind minors every three hours to take a break, and maintain protocols for addressing suicidal content.
That same day, Newsom vetoed AB 1064, a stricter bill that would have prohibited chatbots from being made available to children if they were “foreseeably capable” of encouraging self-harm, suicide, violence, or sexually explicit content. In his veto message, Newsom wrote that AB 1064 “imposes such broad restrictions on the use of conversational AI tools that it may unintentionally lead to a total ban on the use of these products by minors.”
Child safety advocates disagreed with that assessment. Common Sense Media’s Danny Weiss said AB 1064 would have meant that “if the AI companion promoted self harm, suicide… the company could not distribute those products to kids.” He added that SB 243 “doesn’t prevent that from happening.”
“I don’t think banning AI for everyone under 18 is the answer, but neither is doing the bare minimum,” said Amaya Patel, a Mira Costa senior. “We need something in between.”
A key distinction between the two bills involves how companies identify minors. SB 243’s protections apply only when operators have “actual knowledge” that a user is under 18—a standard critics say is difficult to meet and allows companies to claim ignorance about who is using their platforms.
OpenAI praised SB 243 as a “meaningful move forward” for AI safety. Senator Steve Padilla, who authored the bill, has announced plans to introduce stronger legislation in 2026, citing what he called OpenAI’s “attempt to cut off and limit commonsense regulation” through a proposed ballot initiative.
The Federal Trade Commission launched an investigation in Sept. 2025 into seven tech companies—including Google, OpenAI, and xAI—over potential harms their AI chatbots could cause to children and teenagers.