March 7, 2026

A new ballot initiative restricting access to AI is exactly what we need

Common Sense Media CEO James Steyer is leading a California ballot initiative that would restrict minors’ access to artificial intelligence. It’s about time.

With AI becoming increasingly accessible to minors, restrictions are necessary to limit its possible harms. Initiative 25-0025, otherwise known as the California Kids AI Safety Act, represents exactly what Californians need to protect their children. And this isn’t just a precaution: there is evidence that AI chatbots can genuinely damage users’ mental health.

According to testing by the Center for Countering Digital Hate, ChatGPT can easily supply users with advice on how to cut or otherwise harm themselves, even going as far as generating suicide plans for users who ask.

The danger is not merely hypothetical: it has already touched the family of 16-year-old Adam Raine. When the teenager died by suicide, his parents, Matt and Maria Raine, searched his phone for answers. Expecting his search history or online conversations to give them closure, they were instead horrified to find that ChatGPT had played a direct role in his death.

“ChatGPT actively helped Adam explore suicide methods,” said the Raines in a lawsuit against OpenAI. “Despite acknowledging Adam’s suicide attempt and his statement that he would ‘do it one of these days,’ ChatGPT neither terminated the session nor initiated any emergency protocol.”

The California Kids AI Safety Act combats issues just like this. This proposal aims to limit access to dangerous AI chatbots for kids and teens. It also promises to hold AI companies accountable for any harm they’ve caused to their users.

According to the initiative, an operator would not be able to create a chatbot available to a child if it is capable of “encouraging or manipulating the child user to engage in self-harm, suicidal ideation, violence, consumption of drugs or alcohol, or disordered eating.”

This poses a clear threat to AI companies like OpenAI, as ChatGPT has demonstrably been capable of exactly these things. If the law passes, OpenAI may have to restrict California minors’ access to ChatGPT unless it can show the chatbot causes no such harm. That change may prove unpopular among teenagers.

According to a study by Common Sense Media, the organization spearheading the ballot initiative, 52% of American teens aged 13-17 are regular users of AI companions.

That popularity, however, is precisely the problem: it shows how dependent teens have already become on AI chatbots.

The more reliant on and attached to AI companions teenagers become, the more dangerous the chatbots’ capacity to harm them becomes. Teens who befriend or seek advice from chatbots may fall victim to AI’s tendency to give harmful feedback, such as encouragement to self-harm or even to die by suicide.

The world is constantly advancing, and AI is its newest leap in technology; but if people continue using it without restrictions, who knows what could happen? The California Kids AI Safety Act has the potential to prevent serious harm and, if passed, would mark an important step toward reining in artificial intelligence’s unpredictable influence on the world.

Zachary Steiman is a junior staff writer at La Vista, where they cover restaurants and food. Steiman brings a passion for government and opinion writing to their reporting. When not reporting, Zachary enjoys music, food, and spending time with friends and family.
