Technology Policy Brief #158 | Mindy Spatt | October 29, 2025

Summary

California styles itself as a leader in AI regulation. Governor Newsom signed two landmark bills this year over the usual industry objections. But the bills don’t go as far as safety advocates wanted and don’t offer sufficient protections to young, vulnerable users.

Analysis

Reporting on AI Risks: Senate Bill 53

A landmark bill with first-in-the-nation AI standards, Senate Bill 53 imposes new reporting requirements on developers of large AI models (models trained on enormous and diverse datasets that include trillions of data points) and extends whistleblower protections. It targets catastrophic risks, which the bill defines as “a foreseeable and material risk” that an AI model could:

  • Contribute to the death or serious injury of 50 or more people or cause at least $1 billion in damages;
  • Provide expert assistance in creating or releasing a chemical, biological, radiological, or nuclear weapon;
  • Engage in criminal conduct or a cyberattack without meaningful human intervention; or
  • Evade the control of its developer or user.

A predecessor bill, SB 1047, would have applied to a wider range of AI systems, but Governor Newsom vetoed it. Critics have complained that SB 53’s narrow focus on large AI models is much too limited. And SB 53’s exemption for smaller companies and start-ups is a blatant giveaway to an industry Newsom sees as key to California’s prosperity.

SB 1047 was also more focused on prevention than compliance. It would have required AI models to meet safety certification standards before they went live. SB 53 has no similar provisions; it only requires reporting on models after they are put into use. Independent third-party audits were also dropped from the new bill.

“California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive,” Governor Newsom bragged in his signing statement. But other proposed bills set higher liability standards. New York’s RAISE Act (the Responsible AI Safety and Education Act), which has been approved by the New York State Legislature and is awaiting Governor Hochul’s signature, requires that harm be a “probable consequence” of the model that could not have been “reasonably prevented.” It also applies more broadly, covering “large developers” that spend over $100 million in training costs, and bars companies from deploying AI models that present “unreasonable risks of critical harm” until the risks are mitigated.

Deadly Chatbots: Senate Bill 243

Senate Bill 243 is one of the first attempts in the nation to regulate chatbots, but child safety advocates say it doesn’t go far enough. Under SB 243, companies that offer chatbots, such as OpenAI’s ChatGPT, will be required to add safeguards that monitor conversations for signs of suicidal ideation and take steps to prevent self-harm, such as referrals to mental health services. OpenAI is currently being sued for wrongful death in a case alleging that a teenage boy’s relationship with its chatbot drove him to suicide.

Makers of chatbots will be required to remind users of the artificial nature of the chats, and kids using the bots will also get reminders to take breaks. Companies must also take steps to prevent children from being exposed to sexually explicit content through chatbots. Meta, Facebook’s parent company, faced outrage from parents after a leaked copy of its chatbot rules revealed that the company’s bots were allowed to have “sensual” conversations with children.

Child safety advocates from Tech Oversight and Common Sense Media had supported a stronger bill, AB 1064, which would have barred children from using “companion” chatbots unless companies met specific safety thresholds. Common Sense Media CEO Jim Steyer, a brother of billionaire climate activist Tom Steyer, and former U.S. Surgeon General Vivek Murthy, who served in the Biden administration, recently announced that they will file a ballot initiative in California to rein in the use of artificial intelligence chatbots by young people and hold big tech companies accountable for any harms their products cause.

Called “The California Kids AI Safety Act,” the initiative would establish guardrails for companion chatbots, ban cellphones in classrooms, prohibit the sale of children’s data, require regular independent safety audits, and provide for education in AI literacy and safety.

In announcing the move, Steyer and Murthy pointed specifically to the multiple teenagers who have died by suicide after using chatbots. The family of one of those teenagers, Adam Raine, had directly urged Newsom to sign the stronger chatbot bill he vetoed. Raine’s death was also referenced in a statement by Commissioner Melissa Holyoak announcing a Federal Trade Commission investigation into chatbots.

