CommunityNews

California governor signs AI transparency bill into law

Governor Newsom signs SB 53, advancing California’s world-leading artificial intelligence industry

What you need to know: Governor Newsom today signed legislation further establishing California as a world leader in safe, secure, and trustworthy artificial intelligence, creating a new law that helps the state both boost innovation and protect public safety.

SACRAMENTO — Governor Newsom today signed into law Senate Bill 53, the Transparency in Frontier Artificial Intelligence Act (TFAIA), authored by Senator Scott Wiener (D-San Francisco). The legislation is carefully designed to enhance public safety by installing commonsense guardrails on the development of frontier artificial intelligence models, helping build public trust while continuing to spur innovation in these new technologies. The new law builds on recommendations from California’s first-in-the-nation frontier AI report, called for by Governor Newsom and published earlier this year, and helps advance California’s position as a national leader in responsible and ethical AI, the world’s fourth-largest economy, the birthplace of new technology, and the top pipeline for tech talent.

“California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive. This legislation strikes that balance. AI is the new frontier in innovation, and California is not only here for it – but stands strong as a national leader by enacting the first-in-the-nation frontier AI safety legislation that builds public trust as this emerging technology rapidly evolves.”

Governor Gavin Newsom

California works to foster tech leadership and create an environment where industry and talent thrive. The state is balancing its work to advance AI with commonsense laws to protect the public, embracing the technology to make our lives easier and make government more efficient, effective, and transparent. California’s leadership in the AI industry is helping to guide the world in the responsible implementation and use of this emerging technology.

“With a technology as transformative as AI, we have a responsibility to support that innovation while putting in place commonsense guardrails to understand and reduce risk. With this law, California is stepping up, once again, as a global leader on both technology innovation and safety. I’m grateful to the Governor for his leadership in convening the Joint California AI Policy Working Group, working with us to refine the legislation, and now signing it into law. His Administration’s partnership helped this groundbreaking legislation promote innovation and establish guardrails for trust, fairness, and accountability in the most remarkable new technology in many years.”

Senator Scott Wiener

Earlier this year, a group of world-leading AI academics and experts — convened at the request of Governor Newsom — released a first-in-the-nation report on sensible AI guardrails, based on an empirical, science-based analysis of the capabilities and attendant risks of frontier models. The report included recommendations on ensuring evidence-based policymaking, balancing the need for transparency with considerations such as security risks, and determining the appropriate level of regulation in this fast-evolving field. SB 53 is responsive to the recommendations in the report — and will help ensure California’s position as an AI leader. This legislation is particularly important given the failure of the federal government to enact comprehensive, sensible AI policy. SB 53 fills this gap and presents a model for the nation to follow.

“Last year Governor Newsom called upon us to study how California should properly approach frontier artificial intelligence development. The Transparency in Frontier Artificial Intelligence Act (TFAIA) moves us towards the transparency and ‘trust but verify’ policy principles outlined in our report. As artificial intelligence continues its long journey of development, more frontier breakthroughs will occur. AI policy should continue emphasizing thoughtful scientific review and keeping America at the forefront of technology.”

Mariano-Florentino (Tino) Cuéllar
Former California Supreme Court Justice and former member of National Academy of Sciences Committee on the Social and Ethical Implications of Computing Research

Dr. Fei-Fei Li
Co-Director, Stanford Institute for Human-Centered Artificial Intelligence

Jennifer Tour Chayes
Dean of the College of Computing, Data Science, and Society at UC Berkeley

California’s AI dominance

California continues to dominate the AI sector. In addition to being the birthplace of AI, the state is home to 32 of the 50 top AI companies worldwide. California also leads U.S. demand for AI talent: in 2024, 15.7% of all U.S. AI job postings were in California — #1 by state, well ahead of Texas (8.8%) and New York (5.8%), per the 2025 Stanford AI Index. In 2024, more than half of global VC funding for AI and machine learning startups went to companies in the Bay Area. California is also home to three of the four companies that have passed the $3 trillion valuation mark. Each of these California-based companies — Google, Apple, and Nvidia — is a tech company involved in AI, and together they have created hundreds of thousands of jobs.

What the law does:

SB 53 establishes new requirements for frontier AI developers, creating stronger:

:white_check_mark: Transparency: Requires large frontier developers to publicly publish a framework on their websites describing how they have incorporated national standards, international standards, and industry-consensus best practices into their frontier AI frameworks.

:white_check_mark: Innovation: Establishes a new consortium within the Government Operations Agency to develop a framework for creating a public computing cluster. The consortium, called CalCompute, will advance the development and deployment of artificial intelligence that is safe, ethical, equitable, and sustainable by fostering research and innovation.

:white_check_mark: Safety: Creates a new mechanism for frontier AI companies and the public to report potential critical safety incidents to California’s Office of Emergency Services.

:white_check_mark: Accountability: Protects whistleblowers who disclose significant health and safety risks posed by frontier models, and creates a civil penalty for noncompliance, enforceable by the Attorney General’s office.

:white_check_mark: Responsiveness: Directs the California Department of Technology to annually recommend appropriate updates to the law based on multistakeholder input, technological developments, and international standards.

Read in full here:
