RoboNewsWire – Latest Insights on AI, Robotics, Crypto and Tech Innovations

California lawmaker behind SB 1047 reignites push for mandated AI safety reports

By GT | July 11, 2025 | TechCrunch


California State Senator Scott Wiener on Wednesday introduced new amendments to his latest bill, SB 53, that would require the world’s largest AI companies to publish safety and security protocols and issue reports when safety incidents occur.

If signed into law, the bill would make California the first state to impose meaningful transparency requirements on leading AI developers, likely including OpenAI, Google, Anthropic, and xAI.

Senator Wiener’s previous AI bill, SB 1047, included similar requirements for AI model developers to publish safety reports. However, Silicon Valley fought ferociously against that bill, and it was ultimately vetoed by Governor Gavin Newsom. California’s governor then called for a group of AI leaders — including the leading Stanford researcher and co-founder of World Labs, Fei-Fei Li — to form a policy group and set goals for the state’s AI safety efforts.

California’s AI policy group recently published its final recommendations, citing a need for “requirements on industry to publish information about their systems” in order to establish a “robust and transparent evidence environment.” Senator Wiener’s office said in a press release that SB 53’s amendments were heavily influenced by this report.

“The bill continues to be a work in progress, and I look forward to working with all stakeholders in the coming weeks to refine this proposal into the most scientific and fair law it can be,” Senator Wiener said in the release.

SB 53 aims to strike a balance that Governor Newsom claimed SB 1047 failed to achieve — ideally, creating meaningful transparency requirements for the largest AI developers without thwarting the rapid growth of California’s AI industry.

“These are concerns that my organization and others have been talking about for a while,” said Nathan Calvin, VP of State Affairs for the nonprofit AI safety group, Encode, in an interview with TechCrunch. “Having companies explain to the public and government what measures they’re taking to address these risks feels like a bare minimum, reasonable step to take.”

The bill also creates whistleblower protections for employees of AI labs who believe their company’s technology poses a “critical risk” to society — defined in the bill as contributing to the death or injury of more than 100 people, or more than $1 billion in damage.

Additionally, the bill aims to create CalCompute, a public cloud computing cluster to support startups and researchers developing large-scale AI.

Unlike SB 1047, Senator Wiener’s new bill does not make AI model developers liable for the harms of their AI models. SB 53 was also designed not to pose a burden on startups and researchers that fine-tune AI models from leading AI developers, or use open source models.

With the new amendments, SB 53 is now headed to the California State Assembly Committee on Privacy and Consumer Protection for approval. Should it pass there, the bill will also need to pass through several other legislative bodies before reaching Governor Newsom’s desk.

On the other side of the U.S., New York Governor Kathy Hochul is now considering a similar AI safety bill, the RAISE Act, which would also require large AI developers to publish safety and security reports.

The fate of state AI laws like the RAISE Act and SB 53 was briefly in jeopardy as federal lawmakers considered a 10-year moratorium on state AI regulation — an attempt to limit the “patchwork” of AI laws that companies would have to navigate. However, that proposal failed in a 99-1 Senate vote earlier in July.

“Ensuring AI is developed safely should not be controversial — it should be foundational,” said Geoff Ralston, the former president of Y Combinator, in a statement to TechCrunch. “Congress should be leading, demanding transparency and accountability from the companies building frontier models. But with no serious federal action in sight, states must step up. California’s SB 53 is a thoughtful, well-structured example of state leadership.”

Up to this point, lawmakers have failed to get AI companies on board with state-mandated transparency requirements. Anthropic has broadly endorsed the need for increased transparency into AI companies, and even expressed modest optimism about the recommendations from California’s AI policy group. But companies such as OpenAI, Google, and Meta have been more resistant to these efforts.

Leading AI model developers typically publish safety reports for their AI models, but they’ve been less consistent in recent months. Google, for example, did not publish a safety report for Gemini 2.5 Pro, its most advanced model at the time, until months after the model was made available. OpenAI also decided not to publish a safety report for its GPT-4.1 model; a third-party study later suggested that the model may be less aligned than previous AI models.

SB 53 represents a toned-down version of previous AI safety bills, but it still could force AI companies to publish more information than they do today. For now, they’ll be watching closely as Senator Wiener once again tests those boundaries.


