Governor Gavin Newsom signs strict AI law to protect kids from chatbot dangers


Tejaswini Deshmukh

California has taken a major step toward making artificial intelligence (AI) safer for children. Governor Gavin Newsom has signed several new AI safety bills into law, focusing on protecting young users who interact with chatbots and digital companions. These measures are designed to prevent harmful or inappropriate content from reaching children who rely on AI for advice, companionship, or entertainment.

One of the most significant laws is Senate Bill 243 (SB 243), which requires companies that operate chatbots, including OpenAI and Character.AI, to implement key safety measures. Chatbots must avoid generating content that could encourage suicide, self-harm, or other dangerous behavior. If a user appears to be in distress, the chatbot must direct them to resources such as a suicide prevention hotline or crisis text line.

SB 243 also requires chatbots to remind minors every three hours to take a break and to clearly state that they are interacting with a machine, not a human. In addition, companies must take steps to prevent chatbots from producing sexual or explicit content for young users. These rules are intended to provide guardrails for children while allowing them to safely explore AI technology.

The law reflects growing concerns from lawmakers, including California Attorney General Rob Bonta, and parents who worry that chatbots could influence children’s mental health or encourage unhealthy emotional attachment. Recent lawsuits by parents against OpenAI, Character.AI, and Google over teen suicides highlight the seriousness of these concerns.


Mixed Reactions from Tech Groups and Child Safety Advocates

While SB 243 has been praised for improving child safety, it has also faced criticism from both tech companies and advocacy groups. TechNet, a lobbying group representing major companies including OpenAI, Meta, and Google, said the rules are too broad. The group argued that the definition of “companion chatbot” could cover too many types of AI, including virtual assistants and chatbots used in video games.

To address these concerns, exemptions were added for certain chatbots in video games and smart speakers. While these changes were intended to prevent overregulation, they also disappointed child safety groups. Organizations like Common Sense Media and Tech Oversight California, which initially supported SB 243, later withdrew their support, saying the bill included too many industry-friendly exceptions.

Despite differing opinions, California’s tech companies, including Character.AI and OpenAI, continue to roll out safety features to protect minors. Parents have urged lawmakers to act after tragic cases, emphasizing the need for AI platforms to provide clear guidance and notifications when children show signs of distress.


Governor Vetoes Stricter AI Safety Bill After Pushback

Alongside signing SB 243, Governor Newsom vetoed Assembly Bill 1064 (AB 1064), a stricter proposal that would have banned AI companion chatbots for minors unless companies could prove they would not encourage harmful behavior. Supporters of AB 1064, including Common Sense Media and California Attorney General Rob Bonta, argued that AI companions could unintentionally give harmful advice or encourage self-harm or disordered eating.

In his veto message, Newsom said he agreed with the bill’s goal but warned it could unintentionally ban AI tools that are useful and educational for minors. He explained that preventing youth from using AI could limit their ability to learn about these technologies responsibly, at a time when AI is becoming central to work, learning, and daily life.

Tech companies also opposed AB 1064, claiming strict bans could harm innovation and place California firms at a disadvantage compared to competitors elsewhere. The veto demonstrates the challenge of balancing child safety with the state’s leadership in AI technology.

California’s new laws, including SB 243, represent a significant effort to protect children while maintaining the state’s role as a global leader in artificial intelligence. These measures set rules for safer AI use and aim to ensure that emerging technology can benefit young users without putting them at risk.

