When it comes to AI, as California goes, so goes the nation. The most populous state in the US is also the central hub of AI innovation for the entire world, home to 32 of the world's top 50 AI companies. That size and influence have given the Golden State the weight to become a regulatory trailblazer, setting the tone for the rest of the country on environmental, labor, and consumer protection regulation, and more recently on AI as well.
Now, following the dramatic defeat of a proposed federal moratorium on state AI regulation in July, California policymakers see a limited window of opportunity to set the stage for the rest of the country's AI laws. In the early hours of Saturday morning, the California State Assembly voted in favor of SB 53, a bill that would require transparency reports from the developers of highly powerful "frontier" AI models. The bill, which has passed both houses of the state legislature, now goes to Gov. Gavin Newsom to be either signed into law or vetoed.
The models targeted represent the cutting edge of AI: extremely capable generative systems that require massive amounts of data and computing power, like OpenAI's ChatGPT, Google's Gemini, xAI's Grok, and Anthropic's Claude.
AI can offer tremendous benefits, but as the bill is meant to address, it is not without risks. And while there is no shortage of present-day risks from issues like job displacement and bias, SB 53 focuses on possible "catastrophic risks" from AI. Such risks include AI-enabled biological weapons attacks and rogue systems carrying out cyberattacks or other criminal activity that could conceivably bring down critical infrastructure. These catastrophic risks represent widespread disasters that could plausibly threaten human civilization at the local, national, and global levels. They are the kinds of AI-driven disasters that haven't yet occurred, rather than already-realized, more personal harms like AI deepfakes.
Exactly what constitutes a catastrophic risk is up for debate, but SB 53 defines it as a "foreseeable and material risk" of an event that causes more than 50 casualties or over $1 billion in damages, and in which a frontier model plays a meaningful role. How fault is determined in practice would be up to the courts to interpret. It's hard to define catastrophic risk in law when the definition is far from settled, but doing so can help us defend against both near- and long-term consequences.
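For illustration only, here is a minimal sketch of the bill's harm bar read as a literal decision rule. The function and variable names are hypothetical, and in reality findings like foreseeability, materiality, and the model's role are legal judgments made by courts, not boolean flags.

```python
# A literal-minded sketch of SB 53's "catastrophic risk" harm bar.
# Hypothetical names; actual determinations of fault rest with courts.

CASUALTY_THRESHOLD = 50                # more than 50 casualties, or...
DAMAGE_THRESHOLD_USD = 1_000_000_000   # ...over $1 billion in damages

def clears_harm_bar(casualties: int, damages_usd: float,
                    model_played_material_role: bool) -> bool:
    """Would an event clear the bill's severity threshold?"""
    severe = (casualties > CASUALTY_THRESHOLD
              or damages_usd > DAMAGE_THRESHOLD_USD)
    return severe and model_played_material_role
```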
On its own, a single state bill focused on increased transparency probably won't be enough to prevent devastating cyberattacks and AI-enabled chemical, biological, radiological, and nuclear weapons. But the bill represents an effort to regulate this fast-moving technology before it outpaces our efforts at oversight.
SB 53 is the third state-level bill to specifically target AI's catastrophic risks, after California's SB 1047, which passed the legislature only to be vetoed by the governor, and New York's Responsible AI Safety and Education (RAISE) Act, which recently passed the New York legislature and is now awaiting Gov. Kathy Hochul's approval.
SB 53, which was introduced by state Sen. Scott Wiener in February, requires frontier AI companies to develop safety frameworks that specifically detail how they approach catastrophic risk reduction. Before deploying their models, companies must publish safety and security reports. The bill also gives them 15 days to report "critical safety incidents" to the California Office of Emergency Services, and establishes whistleblower protections for employees who come forward about unsafe model deployment that contributes to catastrophic risk. SB 53 aims to hold companies publicly accountable for their AI safety commitments, with financial penalties of up to $1 million per violation.
In many ways, SB 53 is the spiritual successor to SB 1047, also introduced by Wiener.
Both cover large models trained using 10^26 FLOPS, a measure of very significant computing power used in a variety of AI legislation as a threshold for significant risk, and both bills strengthen whistleblower protections.
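To give a sense of what that 10^26 threshold means in practice, here is a rough back-of-the-envelope sketch. The cluster size, per-chip throughput, and utilization figures are illustrative assumptions, not numbers drawn from either bill or from any real training run.

```python
# Rough scale of a 10^26-FLOP training run, using illustrative
# round numbers rather than the specs of any actual cluster.

THRESHOLD_FLOP = 1e26      # compute threshold used by SB 53 and SB 1047

gpus = 25_000              # hypothetical accelerator count
peak_flops_per_gpu = 1e15  # ~1 petaFLOP/s per chip (rough)
utilization = 0.4          # fraction of peak typically sustained

sustained = gpus * peak_flops_per_gpu * utilization  # FLOP per second
days = THRESHOLD_FLOP / sustained / 86_400
print(f"~{days:.0f} days of continuous training")    # ~116 days
```

Under these assumptions, only a handful of companies in the world could plausibly cross the line, which is the point of using compute as a proxy for frontier capability.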
Where SB 53 departs from SB 1047 is its focus on transparency and prevention. While SB 1047 aimed to hold companies accountable for catastrophic harms caused by their AI systems, SB 53 formalizes the sharing of safety frameworks, something many frontier AI companies, including Anthropic, already do voluntarily. It focuses squarely on the heavy hitters, with its rules applying only to companies that generate $500 million or more in gross revenue.
"The science of how to make AI safe is rapidly evolving, and it's currently difficult for policymakers to write prescriptive technical rules for how companies should manage safety," said Thomas Woodside, the co-founder of Secure AI Project, an advocacy group that aims to reduce extreme risks from AI and is a sponsor of the bill, over email. "This light-touch policy prevents backsliding on commitments and encourages a race to the top rather than a race to the bottom."
Part of the logic of SB 53 is the ability to adapt the framework as AI progresses. The bill authorizes the California Attorney General to change the definition of a large developer after January 1, 2027, in response to AI advances.
Proponents of the bill were optimistic about its chances of being signed by the governor should it pass the legislature. On the same day that Gov. Newsom vetoed SB 1047, he commissioned a working group focused solely on frontier models. The group's resulting report provided the foundation for SB 53. "I would bet, with roughly 75 percent confidence, that SB 53 will be signed into law by the end of September," said Dean Ball, a former White House AI policy adviser, vocal SB 1047 critic, and SB 53 supporter, to Transformer.
But several industry organizations rallied in opposition, arguing that additional compliance regulation would be expensive, given that AI companies should already be incentivized to avoid catastrophic harms. OpenAI has lobbied against it, and the technology trade group Chamber of Progress argues that the bill would require companies to file unnecessary paperwork and needlessly stifle innovation.
"These compliance costs are merely the beginning," Neil Chilson, head of AI policy at the Abundance Institute, told me over email. "The bill, if passed, would feed California regulators truckloads of company information that they will use to design a compliance industrial complex."
By contrast, Anthropic enthusiastically endorsed the bill on Monday. "The question isn't whether we need AI governance; it's whether we develop it thoughtfully today or reactively tomorrow," the company explained in a blog post. "SB 53 offers a solid path toward the former." (Disclosure: Vox Media is one of several publishers that have signed partnership agreements with OpenAI, while Future Perfect is funded in part by the BEMC Foundation, whose major funder was also an early investor in Anthropic. Neither organization has editorial input into our content.)
The debate over SB 53 ties into broader disagreements about whether states or the federal government should drive AI safety regulation. But since the overwhelming majority of these companies are based in California, and nearly all do business there, the state's legislation matters for the entire country.
"A federally led transparency approach is far, far, far preferable to the multi-state alternative," where a patchwork of state regulations can conflict with one another, said Cato Institute technology policy fellow Matthew Mittelsteadt in an email. But "I love that the bill has a provision that would allow companies to defer to a future alternative federal standard."
"The natural question is whether a federal approach can even happen," Mittelsteadt continued. "In my view, the jury is out on that, but the possibility is far more likely than some suggest. It's been less than 3 years since ChatGPT was released. That's hardly a lifetime in public policy."
But in a time of federal gridlock, frontier AI developments won't wait for Washington.
The catastrophic risk divide
The bill's focus on, and framing of, catastrophic risks isn't without controversy.
The idea of catastrophic risk comes from the fields of philosophy and quantitative risk assessment. Catastrophic risks are downstream of existential risks, which threaten humanity's very survival or else permanently reduce our potential as a species. The hope is that if these doomsday scenarios are identified and prepared for, they can be prevented or at least mitigated.
But if existential risks are clear (the end of the world, or at least the world as we know it), what falls under the catastrophic risk umbrella, and the best way to prioritize those risks, depends on who you ask. There are longtermists, people focused primarily on humanity's far future, who place a premium on things like multiplanetary expansion for human survival. They're often mainly concerned with risks from rogue AI or extremely lethal pandemics. Neartermists are more preoccupied with current risks, like climate change, mosquito-borne disease, or algorithmic bias. These camps can blend into each other (neartermists would also like to avoid being hit by asteroids that could wipe out a city, and longtermists don't dismiss risks like climate change), and the best way to think about them is as two ends of a spectrum rather than a strict binary.
You can think of the AI ethics and AI safety frameworks as the near- and longtermism of AI risk, respectively. AI ethics concerns the moral implications of how the technology is deployed in the present, including issues like algorithmic bias and human rights. AI safety focuses on catastrophic risks and potential existential threats. But, as Vox's Julia Longoria reported in the Good Robot series for Unexplainable, interpersonal conflicts have led these two factions to work against each other, much of which has to do with emphasis. (AI ethics people argue that catastrophic risk concerns overhype AI's capabilities and ignore its impact on vulnerable people right now, while AI safety people worry that if we focus too much on the present, we won't have ways to mitigate larger-scale problems down the line.)
But behind the question of near- versus long-term risks lies another one: What, exactly, constitutes a catastrophic risk?
SB 53 initially set the standard for catastrophic risk at 100 rather than 50 casualties (similar to New York's RAISE Act) before halving the threshold in an amendment to the bill. While the average person might consider, say, many people driven to suicide after interacting with AI chatbots to be catastrophic, such a risk is outside the bill's scope. (The California State Assembly just passed a separate bill to regulate AI companion chatbots by preventing them from engaging in discussions about suicidal ideation or sexually explicit material.)
SB 53 focuses squarely on harms from "expert-level" frontier AI model assistance in creating or deploying chemical, biological, radiological, and nuclear weapons; committing crimes like cyberattacks or fraud; and "loss of control" scenarios in which AIs go rogue, behaving deceptively to avoid being shut down and replicating themselves without human oversight. For example, an AI model could be used to guide the creation of a new deadly virus that infects millions and kneecaps the global economy.
"The 50 to 100 deaths or a billion dollars in property damage is just a proxy to capture really widespread and substantial impact," said Scott Singer, lead author of the California Report for Frontier AI Policy, which helped inform the basis of the bill. "We do look at, like, AI-enabled or AI potentially [caused] or correlated suicide. I think that's, like, a very serious set of issues that demands policymaker attention, but I don't think it's the core of what this bill is trying to address."
Transparency is useful in preventing such catastrophes because it can help raise the alarm before problems get out of hand, allowing AI developers to correct course. And in the event that such efforts fail to prevent a mass casualty incident, enhanced safety transparency can help law enforcement and the courts figure out what went wrong. The challenge there is that it can be difficult to determine how much a model is responsible for a particular outcome, Irene Solaiman, the chief policy officer at Hugging Face, a collaboration platform for AI developers, told me over email.
"These risks are coming and we should be ready for them and have transparency into what the companies are doing," said Adam Billen, the vice president of public policy at Encode, an organization that advocates for responsible AI leadership and safety. (Encode is another sponsor of SB 53.) "But we don't know exactly what we're going to need to do once the risks themselves appear. But right now, when these things aren't happening at a large scale, it makes sense to be sort of focused on transparency."
Still, a transparency-focused bill like SB 53 is insufficient for addressing already-existing harms. Once we know something is a problem, the focus should be on mitigating it.
"Maybe four years ago, if we had passed some sort of transparency legislation like SB 53 but focused on those harms, we might have had some warning signs and been able to intervene before the widespread harms to kids started happening," Billen said. "We're trying to kind of correct that mistake on these issues and get some sort of forward-facing information about what's happening before things get crazy, basically."
SB 53 risks being both overly narrow and unclearly scoped. We have not yet faced these catastrophic harms from frontier AI models, and the most devastating risks might take us entirely by surprise. We don't know what we don't know.
It's also certainly possible that models trained below 10^26 FLOPS, which aren't covered by SB 53, have the potential to cause catastrophic harm under the bill's definition. The EU AI Act sets its threshold for "systemic risk" at the smaller 10^25 FLOPS, and there's disagreement about the usefulness of computational power as a regulatory standard at all, especially as models become more efficient.
As it stands right now, SB 53 occupies a different niche from bills focused on regulating AI use in mental healthcare or data privacy, reflecting its authors' desire not to step on the toes of other legislation or bite off more than it can reasonably chew. But Chilson, the Abundance Institute's head of AI policy, is part of a camp that sees SB 53's focus on catastrophic harm as a "distraction" from AI's real near-term benefits and concerns, like its potential to accelerate the pace of scientific research or to create nonconsensual deepfake imagery, respectively.
That said, deepfakes could certainly cause catastrophic harm. For instance, imagine a hyper-realistic deepfake impersonating a bank employee to commit fraud at a multibillion-dollar scale, said Nathan Calvin, the vice president of state affairs and general counsel at Encode. "I do think some of the lines between these things in practice can be a bit blurry, and I think in some ways…that isn't necessarily a bad thing," he told me.
It could be that the ideological debate over what qualifies as a catastrophic risk, and whether that's worthy of our legislative attention, is just noise. The bill is meant to regulate AI before the proverbial horse is out of the barn. The average person isn't going to worry about the likelihood of AI sparking nuclear war or biological weapons attacks, but they do think about how algorithmic bias might affect their lives in the present. But in trying to prevent the worst-case scenarios, perhaps we can also avoid the "smaller," nearer harms. If they're effective, forward-facing safety provisions designed to prevent mass casualty events will also make AI safer for individuals.
If Gov. Newsom signs SB 53 into law, it could inspire other state attempts at AI regulation through a similar framework, and eventually encourage federal AI safety legislation to move forward.
How we think about risk matters because it determines where we focus our prevention efforts. I'm a firm believer in the value of defining your terms, in law and in debate. If we're not on the same page about what we mean when we talk about risk, we can't have a real conversation.
Update, September 13, 2025, 11:55 am ET: This story was originally published on September 12 and has been updated to reflect the outcome of the California State Assembly vote.