California Lawmakers Face Backlash Over Doomsday-Driven AI Bill
The bill could have unintended consequences that reach far beyond California, affecting the entire nation.

This month, the California State Assembly is set to vote on Senate Bill 1047, legislation that could significantly disrupt AI research. If history is any guide, California could once again drag the entire country towards unpopular tech regulation. Yet the bill's sponsors have been caught off guard by the backlash from academics and the AI industry. "I did not appreciate how toxic the division is," admitted state Sen. Scott Wiener (D–San Francisco), who introduced the bill.
Academics, open-source developers, and companies of all sizes are waking up to the threat this bill poses to their future. S.B. 1047, known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, mandates extensive safety protocols and imposes harsh liabilities on developers for the potential misuse of their AI models.
Fei-Fei Li, often hailed as the "Godmother of AI," warned that "S.B.-1047 will cripple public sector and academic AI research. Open-source development is important in the private sector, but vital to academia, which cannot advance without collaboration and access to model data." Echoing her concerns, Ion Stoica, director of the University of California, Berkeley's Sky Computing Lab, argued that the bill "will hurt innovation in California, and it will result in a more dangerous, less safe world."
Tech entrepreneur Andrew Ng pointed out that "this proposed law makes a fundamental mistake of regulating AI technology instead of AI applications, and thus would fail to make AI meaningfully safer."
The backlash isn't limited to academics and large institutions. Hundreds of "little tech" startups are also voicing their concerns. Leading the resistance, startup incubator Y Combinator and venture capital firm Andreessen Horowitz have each gathered letters of opposition, signed by over a hundred startups.
Even lawmakers, including Rep. Ro Khanna (D–Calif.), Rep. Zoe Lofgren (D–Calif.), and former San Francisco interim mayor Mark Farrell, have publicly opposed Wiener's bill.
With such widespread pushback, the question arises: How was Wiener caught so off guard?
Within six months of Wiener introducing S.B. 1047, a wave of legislators and regulators changed their minds as they caught up to the available evidence. Lawmakers of different parties, factions, and countries began to adopt a more optimistic tone.
The new Republican platform urged the party to "support AI Development rooted in Free Speech and Human Flourishing." Demonstrating the real-world benefits of AI, Rep. Jennifer Wexton (D–Va.) used technology from Eleven Labs to restore her voice after losing it to a neurological condition. Even the Federal Trade Commission voiced support for open-source AI.
This shift in sentiment was a loud rebuttal to the AI Safety movement, which has been heavily backed by billionaires like Sam Bankman-Fried and Dustin Moskovitz, as well as multimillionaire Jaan Tallinn. With hundreds of millions of dollars behind it, the movement has pushed to pause AI research, driven by the doomsday prediction that sufficiently advanced AI could lead to human extinction.
While mainstream experts focused on their work, lawmakers were swayed by one-sided narratives from the Open Philanthropy-funded Center for AI Safety (CAIS), which claims that "mitigating the risk of extinction from AI should be a global priority." Until recently, policymakers remained largely unaware of just how disconnected these narratives were from the broader AI community.
As federal legislators develop a more grounded understanding of AI, they are gravitating toward evidence-based regulation. This shift led economist Tyler Cowen to declare that "AI safety is dead" in a Bloomberg editorial following Senate Majority Leader Chuck Schumer's unveiling of a bipartisan Senate roadmap that aims to invest in AI while addressing AI misuse. "Schumer's project suggests that the federal government is more interested in accelerating AI than hindering it," Cowen wrote.
The AI Safety movement initially gained traction by exploiting a political vacuum. Mainstream academics and industry had little reason to engage in legislative outreach before the introduction of S.B. 1047. Many were shocked that lawmakers would seriously consider the claims made by CAIS and Wiener. "I'm surprised to see [S.B. 1047] being seriously discussed in the California legislature," remarked Ethan Fast from VCreate, a company using machine learning to treat diseases. AI researcher Kenneth O. Stanley offered a sharper critique: "It almost seems like science fiction to see an actual bill like this. A more useful bill would be narrower in scope and focus on addressing specific near-term harms."
Legislators across the country are now turning to an evidence-based framework for AI regulation, focusing on targeting malicious actors who misuse AI rather than researchers themselves. As Ng points out, S.B. 1047 "ignores the reality that the number of beneficial uses of AI models is, like electric motors, vastly greater than the number of harmful ones." Even as most of the country moved beyond the AI Safety narrative, the possibility that California radicals could dictate the nation's future remains a haunting prospect.
""The bill could have unintended consequences that reach far beyond California, affecting the entire nation.""
CA sees that as an upside.
Of course, the only reason that might be remotely true is that those companies refuse to leave California despite California's overt hostility toward them.
Really, this is a question?
It raises the better question of what kind of naive novice is Brian Chau?
Oh, some bureaucrat.
Well, that's probably a little unfair for a rhetorical question. But anyone with half a brain knows the answer: politicians go off half-cocked all the time. Knee-jerk reactions are all they know, just another sign that they have far too much free time and need to be cut way way back. How about California does like Texas, which I think has a single 90 day session every two years, or something like that. California used to.
Bureaucracies expand because bureaucrats have no way to measure their success except expanding budgets, employee counts, and pages of regulations. They're like the Red Queen: if they don't keep expanding, voters will figure out how idle and useless they are and get rid of them.
You want a new law? Gotta get rid of two.
And they would replace two small, simple laws with a monstrosity of an omnibus bill.
If they want to pass a new million word bill, they have to get rid of a million words worth of old laws.
I "solved" that problem partly by requiring 2/3 majority in all chambers to pass new bills, but only 1/2 majority in any single chamber to repeal laws.
Methed up Bureaucrats who had watched Caprica together.
I liked Caprica. It had… potential.
Lots of exploring of concepts of virtual reality, post-life intelligence, and even gaming tropes of the era, like the exclusive section where if your avatar is killed you can’t come back anymore, to up the ante.
Had the “the corporations” as the bad guys ridiculous trope of its era, but it also explored the fallout from stupid radicalized kids turning to terrorism.
Could have been an interesting show for a few seasons if they didn’t water down their writer pool. Alas, cancelled.
So AI bad and now we’re getting Cylons because of Weiner (the man who also says we should have speed limit sensors on all our cars). If anyone should be blown up on a train like the poor girl in Caprica it should be him… but it’s California and we won’t have that high speed rail link running until he’s died of old age.
"I did not appreciate how toxic the division is," admitted state Sen. Scott Wiener (D–San Francisco), who introduced the bill.
From the story, it appears the regulation was created in a vacuum. If Wiener bothered with gathering inputs from all sides before putting a government jackboot on AI development, there may not have been such a toxic response.
Why would someone who is supposed to be representing his constituency ask common folk for input when he can just use his superiority to rule over them?
Wiener is notorious for sticking himself where he doesn't belong, no matter how painful it is for the rest of us.
Politicians want to control anything that might be smarter than they are. Look out, anything brighter than a not so clever chicken.
Fair notice to all startups: Don't do it in California.
Fair notice to all venture capitalists: Don't fund anyone in California.
Fair notice to everyone in California: Get out while you still can.
Is it really a good idea to attach Bankman-Fried's name to anything you're in favor of?
I think he attached it because that Bankman fried fucktard was on the OPPOSITE side of the issue from this guy.
Sen. Tiny Wiener thinks it can keep skynet in a box.
This is a very disingenuous article, which is what I've come to expect of Mr. Chau. Two examples in which his bias stands out:
- YCombinator and Andreessen Horowitz are described as "leading the resistance," while the AI safety movement is described as being "backed by billionaires." It's pretty clear that rich people have varying opinions, and certainly opponents of SB 1047 can't be said to be under-resourced.
- Chau mentions academics opposed to the bill but doesn't mention the fact that there's also support from academics. The director of CAIS Dan Hendrycks is one of the most prolific young AI researchers, and Yoshua Bengio and Geoff Hinton support the bill.
In Chau's framing, SB 1047 arose from the ignorance of California legislators toward AI technology. In truth, the bill arose from within the AI community, among smart people who understand the technology and see AI-caused catastrophe as a real possibility.
> "policymakers remained largely unaware of just how disconnected these narratives were from the broader AI community"
The truth here is that the broader AI community is disconnected from itself. AI researchers are vastly divided on how soon human-level AGI will be developed, and what its impact will be. Some researchers think that any talk of "rogue AI" is absolute poppycock, while others think that AI-related disaster is quite likely. This is best illustrated by the division between the three "godfathers of AI": LeCun thinks that talk of doom is ridiculous, Bengio thinks that "rogue AI" and other risks are real but solvable problems that we should put resources toward solving, and Hinton thinks that human-level AI is imminent and catastrophic results for humans are hard to avoid (he's said 50-50 in the past). The median AI researcher puts the likelihood of "human extinction or similarly permanent and severe disempowerment" caused by AI at 5% (from the AI Impacts survey).
If you feel like this talk of extinction seems detached from common-sense, I would recommend reading Bengio's article "Reasoning through arguments against taking AI safety seriously". The debate here is a deep factual one that's easy to miss if you don't read past how the news media frames the AI issue.
It's hard to know how to act under such uncertainty, and I'm not sure whether SB 1047 was the right law. But Chau's proposal of "do nothing and plow full steam ahead" seems like a poor option to me. Ideally, the AI community would be working together to figure out how to avert the worst possibilities, and agree on the conditions which might indicate whether or not the rogue AI concern is overblown. Unfortunately, people like Chau seem more interested in sowing further division.
The truth here is that this is none of the government's business. All this palaver over whether government should be pro or anti neglects the MYOB position.
MYOB works if your own business is not imposing significant externalities on the rest of the world. The concern about rogue AI is that the creation and deployment of advanced AI models (not current models, but future models) could come with the risk that we lose control of the systems and they start autonomously trying to gain power or kill humans.
If my neighbor is engineering a biological virus in a lab that could cause a pandemic without safeguards to prevent it from being released, he can’t rightly say that I should mind my own business if I try to stop him. The argument is that autonomous, human-level AGI (the explicit goal of some AI companies) is potentially dangerous in a similar way; it’s dangerous to people beyond just its creators and users.
There’s a separate question of whether government intervention would help, and whether our institutions are capable of making the laws that would help.
Prove it first. Don't shoot and ask questions later.
"MYOB works if your own business is not imposing significant externalities on the rest of the world. The concern about rogue AI is that the creation and deployment of advanced AI models (not current models, but future models) could come with the risk that we lose control of the systems and they start autonomously trying to gain power or kill humans...."
Show us your crystal ball which is 100% accurate, or fuck off and die.
Sorry, had forgotten the name:
The “Precautionary Principle”: Don’t do ANYTHING until you can prove it to be 100% harmless!
Please go back to hunting and gathering, leaving us civilized folks alone.
AI researchers are vastly divided on how soon human level AGI will be developed,
Never. AI can't know fear so it can't empathize. That means it can only emulate human decision making and there are far too many bad examples to learn from.
You teach an AI to know fear and it is no longer artificial. Fear is what it means to be alive.
In Chau’s framing, SB 1047 arose from the ignorance of California legislators toward AI technology. In truth, the bill arose from within the AI community, among smart people who understand the technology and see AI-caused catastrophe as a real possibility.
I'm hovering around the edges of this debate, but I think this can't be overstated. When Reason immediately started swiveling its hips about AI and talking about how the AI companies were all freewheeling cowboys on the frontier of moving fast and breaking things, I pointed out that Sam Altman and various other leaders in AI were quick to point out that AI did have dangers, and that you should be skeptical of all the fly-by-night AI companies and focus on theirs, because they and they alone were putting in the proper guardrails and safety systems that others weren't.
We always like to look at these things as if a legislator sat in his office and suddenly said, "Hang on, I know... I'mma write a bill!"
No, more often than not, there are industry leaders and other constituents who are on the phone to that legislator telling him he needs to write a bill, and then dictating what's going to be in it.
> With such widespread pushback, it raises the question: How was Wiener caught so off guard?
Because politicians, and not just weiner, are ignorant of technology and just do what they think will get them feelz good votes. They also have the mistaken belief, along with most voters, that feelz good tingles are sufficient to make good law.
To Weiner, something had to be done, and this was something, therefore it had to be done. Now that he understands there will be pushback, he will find something else that needs to be done, because something always needs to be done.
We often discuss states seceding. Can the U.S. kick a state out?
The bill could have unintended consequences that reach far beyond California, affecting the entire nation.
Like how section 230 makes internet in China so free?
Fei-Fei Li, often hailed as the "Godmother of AI," warned that "S.B.-1047 will cripple public sector [...]
Wait, how do you sell me on SB1047 without selling me on SB1047?
The entire Kalifornia government is toxic to the rest of the world.
"..."I did not appreciate how toxic the division is," admitted state Sen. Scott Wiener (D–San Francisco), who introduced the bill..."
This should be a part of every bill Wiener introduces or for which he votes. Among the truly imbecilic SF contributions to the State Government, you would be hard-pressed to find a more ignorant example.