California Lawmakers Face Backlash Over Doomsday-Driven AI Bill
The bill's unintended consequences could reach far beyond California to affect the entire nation.
This month, the California State Assembly is set to vote on Senate Bill 1047, legislation that could significantly disrupt AI research. If history is any guide, California could once again drag the entire country toward unpopular tech regulation. Yet the bill's sponsors have been caught off guard by the backlash from academics and the AI industry. "I did not appreciate how toxic the division is," admitted state Sen. Scott Wiener (D–San Francisco), who introduced the bill.
Academics, open-source developers, and companies of all sizes are waking up to the threat this bill poses to their future. S.B. 1047, known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, mandates extensive safety protocols and imposes harsh liabilities on developers for the potential misuse of their AI models.
Fei-Fei Li, often hailed as the "Godmother of AI," warned that "S.B. 1047 will cripple public sector and academic AI research. Open-source development is important in the private sector, but vital to academia, which cannot advance without collaboration and access to model data." Echoing her concerns, Ion Stoica, director of the University of California, Berkeley's Sky Computing Lab, argued that the bill "will hurt innovation in California, and it will result in a more dangerous, less safe world."
Tech entrepreneur Andrew Ng pointed out that "this proposed law makes a fundamental mistake of regulating AI technology instead of AI applications, and thus would fail to make AI meaningfully safer."
The backlash isn't limited to academics and large institutions. Hundreds of "little tech" startups are also voicing their concerns. Leading the resistance, startup incubator Y Combinator and venture capital firm Andreessen Horowitz have each gathered opposition letters signed by more than a hundred startups.
Even politicians, including Rep. Ro Khanna (D–Calif.), Rep. Zoe Lofgren (D–Calif.), and former San Francisco interim mayor Mark Farrell, have publicly opposed Wiener's bill.
Such widespread pushback raises the question: How was Wiener caught so off guard?
In the six months after Wiener introduced S.B. 1047, a wave of legislators and regulators changed their minds as they caught up with the available evidence. Lawmakers of different parties, factions, and countries began to adopt a more optimistic tone toward AI.
The new Republican platform pledged to "support AI Development rooted in Free Speech and Human Flourishing." Demonstrating the real-world benefits of AI, Rep. Jennifer Wexton (D–Va.) used technology from ElevenLabs to restore her voice after losing it to a neurological condition. Even the Federal Trade Commission voiced support for open-source AI.
This shift in sentiment was a loud rebuttal to the AI Safety movement, which has been heavily backed by billionaires like Sam Bankman-Fried and Dustin Moskovitz, as well as multimillionaire Jaan Tallinn. With hundreds of millions of dollars behind it, the movement has pushed to pause AI research, driven by the doomsday prediction that sufficiently advanced AI could lead to human extinction.
While mainstream experts focused on their work, lawmakers were swayed by one-sided narratives from the Open Philanthropy-funded Center for AI Safety (CAIS), which claims that "mitigating the risk of extinction from AI should be a global priority." Until recently, policymakers remained largely unaware of just how disconnected these narratives were from the broader AI community.
As federal legislators develop a more grounded understanding of AI, they are gravitating toward evidence-based regulation. This shift led economist Tyler Cowen to declare that "AI safety is dead" in a Bloomberg Opinion column following Senate Majority Leader Chuck Schumer's unveiling of a bipartisan Senate roadmap that aims to invest in AI while addressing its misuse. "Schumer's project suggests that the federal government is more interested in accelerating AI than hindering it," Cowen wrote.
The AI Safety movement initially gained traction by exploiting a political vacuum. Mainstream academics and industry had little reason to engage in legislative outreach before the introduction of S.B. 1047. Many were shocked that lawmakers would seriously consider the claims made by CAIS and Wiener. "I'm surprised to see [S.B. 1047] being seriously discussed in the California legislature," remarked Ethan Fast from VCreate, a company using machine learning to treat diseases. AI researcher Kenneth O. Stanley offered a sharper critique: "It almost seems like science fiction to see an actual bill like this. A more useful bill would be narrower in scope and focus on addressing specific near-term harms."
Legislators across the country are now turning to an evidence-based framework for AI regulation, one that targets malicious actors who misuse AI rather than the researchers who build it. As Ng points out, S.B. 1047 "ignores the reality that the number of beneficial uses of AI models is, like electric motors, vastly greater than the number of harmful ones." Even as most of the country has moved beyond the AI Safety narrative, the possibility that California radicals could dictate the nation's future remains a haunting prospect.