The Senate Is One Step Closer To Passing a 10-Year Moratorium on State AI Regulation
The Senate parliamentarian says the 10-year AI moratorium may be passed by a simple majority through the Senate's budget reconciliation process.

For the past several years, states have been trying to regulate the burgeoning artificial intelligence (AI) industry. Of the 635 AI-related bills considered in 2024, nearly 100 were signed into law, and 1,000 more bills have been proposed this year. A federal provision that would prevent the proliferation of reactionary local AI regulation cleared a key hurdle toward potential enactment on Saturday.
The U.S. Senate Committee on Commerce, Science, and Transportation released its budget reconciliation text on June 5, which includes the language imposing a moratorium on state AI legislation. The Senate parliamentarian, the nonpartisan official in charge of interpreting Senate rules, decided on Saturday that the moratorium does not violate the Byrd rule, which blocks all non-budgetary matters from inclusion in reconciliation bills, and may be passed by a simple majority via the budget reconciliation process.
The section conditions the receipt of federal funding from the Broadband Equity, Access, and Deployment (BEAD) program on compliance with a 10-year pause on local AI regulation. Reason's Joe Lancaster explains that BEAD "authorized more than $42 billion in grants, to 'connect everyone in America to reliable, affordable high-speed internet by the end of the decade.'" BEAD was part of the Infrastructure Investment and Jobs Act, which was signed into law in November 2021. By June 2024, BEAD had "not connected even 1 person with those funds," said Brendan Carr, then a commissioner and now chairman of the Federal Communications Commission. In March, President Donald Trump paused the program.
On June 6, the National Telecommunications and Information Administration, the bureau inside the Commerce Department responsible for reviewing applications for and disbursing BEAD funding, issued a policy notice that voided all previously approved final proposals. No BEAD funding has yet been disbursed.
The moratorium does not directly preempt local AI regulation but forbids states from enforcing "any law or regulation…limiting, restricting, or otherwise regulating artificial intelligence…entered into interstate commerce" for 10 years following the enactment of the One Big Beautiful Bill Act. The committee described the provision as preventing states from "strangling AI deployment with EU-style regulation."
Some states have already passed stringent AI legislation, including New York's Responsible AI Safety and Education (RAISE) Act and Colorado's Consumer Protections for Artificial Intelligence. These laws are "prime examples of costly mandates that could be covered by the moratorium," says Adam Thierer, senior fellow for the Technology and Innovation team at the R Street Institute. Moreover, in the absence of the moratorium, Thierer says "a parochial patchwork of rules will burden innovation, investment, and competition in robust nationwide AI systems." Thierer prefers outright federal preemption over the current proposal, but is hopeful that state lawmakers will think twice about imposing costly AI mandates when they stand to lose federal grants for doing so.
Neil Chilson, head of AI policy at the Abundance Institute, says withholding billions of dollars of BEAD funding encourages non-enforcement of poorly designed and heavy-handed state laws, especially those that "self-identify as AI 'anti-bias' regulations." The California Privacy Protection Agency's (CPPA) proposed AI regulation, which requires businesses to allow users to opt out of automated decision-making technology, is one such law. Chilson and Taylor Barkley, the Abundance Institute's director of public policy, report that, by the CPPA's own estimates, the regulation "will impose $3.5 billion of compliance costs in the first year, with average annual costs around $1 billion [and] will trigger job losses peaking at roughly 126,000 positions by 2030."
Some groups, including the Center for Democracy and Technology, have raised concerns that the moratorium "could prevent states from enforcing even basic consumer protection and anti-fraud laws if they involve an AI system." Will Rinehart, senior fellow at the American Enterprise Institute, explains that "privacy laws, consumer protection rules, and fraud statutes still apply to AI companies" and that the moratorium will not prevent states from using these laws to address AI issues.
Even though the Senate parliamentarian has ruled that the AI moratorium section may be included in the reconciliation bill, the provision is controversial enough that the Senate may remove it altogether. If it does, a patchwork of state and local regulations will slow the development of American AI by imposing billions of dollars of regulatory costs on the industry.
The powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people.
Luddites gotta lud.
AI is worst-case news for obsolete right-wing dropouts.
I feel certain that lawmaker chicks like AOC and Jasmine Crockett have a deep understanding of the complexities of AI and will craft regulations and laws that will benefit everyone.
You're entitled to be sceptical of AOC's and Crockett's economic and legal wisdom here, but as far as understanding AI, I suspect that AOC, given her science background, probably gets it better than most, certainly more than the superannuated clowns and motel minge elsewhere in the House. And I am sure that they both have a better understanding than PV1 intellects like you.
I don't really know if anyone is at fault, but discussants, politicians, and the general public have no real idea what AI is.
It is almost all 40-year-old NLP and expert systems. Hardly 'intelligence' in any sense of the word.
Congress should pass their own regulation prior to blocking State level regulations.
The most libertarian comment so far - - - - - - - - - - - -
The California Privacy Protection Agency's (CPPA) proposed AI regulation, which requires businesses to allow users to opt out of automated decision-making technology, is one such law. Chilson and Taylor Barkley, the Abundance Institute's director of public policy, report that, by the CPPA's own estimates, the regulation "will impose $3.5 billion of compliance costs in the first year, with average annual costs around $1 billion [and] will trigger job losses peaking at roughly 126,000 positions by 2030."
I think this is tremendously misleading about the CA regulation proposal.
The costs they are pulling from that CPPA report are not from "requir[ing] businesses to allow users to opt out of automated decision-making technology." Those numbers are totals for complying with the whole package of changes to regulations. The costs of complying with the "ADMT" (Automated Decision-making Technology) regulations are about the same as the additional costs from requiring cybersecurity audits of businesses that handle large amounts of consumer data ("PI," which is "personal information"), especially sensitive data ("SPI" = sensitive personal information).
It is also the total cost to all businesses affected by the regulations, which the report estimates could include over 50,000 businesses. Here is the paragraph where the think tankers got the numbers they gave to the author of this article:
Combining the cost estimates for CCPA updates, CSA, ADMT, and RA described in Section 2.4, we estimate total costs for the proposed regulations to be $3.5 billion in the first year and to average $1.0 billion across the first ten years following implementation. Estimated initial costs for a typical business range from $7,045 to $122,666. The estimated ongoing costs for a typical business are $26,015.
The paragraph after that one breaks it down by how much each part of the proposed changes will cost:
First-year total costs are comprised of approximately $369M in costs associated with updates to CCPA regulations, $2.0B in costs associated with CSA, $207M in costs associated with RA, and $835M in costs associated with ADMT. While CCPA updates do not have estimated ongoing costs, there are ongoing annual costs associated with each of other elements including CSA (estimated range of $308M–$615M per year), RA (estimated range of $31M–$62M per year), and ADMT (estimated range of $125M–$250M per year).
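A quick sanity check of that breakdown (a rough Python sketch; the dollar figures are the ones quoted above, and "RA" is the report's abbreviation, presumably for risk assessments):

    # Sanity check of the CPPA report's first-year cost breakdown.
    # Figures (in millions of dollars) are as quoted above; labels
    # follow the report's abbreviations.
    first_year_costs_m = {
        "CCPA updates": 369,   # updates to existing CCPA regulations
        "CSA": 2000,           # cybersecurity audits
        "RA": 207,             # risk assessments (assumed expansion)
        "ADMT": 835,           # automated decision-making technology rules
    }

    total_m = sum(first_year_costs_m.values())
    admt_share = first_year_costs_m["ADMT"] / total_m

    print(f"Sum of components: ${total_m / 1000:.2f}B")        # $3.41B
    print(f"ADMT share of first-year total: {admt_share:.0%}")  # 24%

The components sum to about $3.4 billion (the gap from the $3.5 billion headline is presumably rounding in the report), and the ADMT rules account for roughly a quarter of that, not the whole figure.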
It goes on later to justify the regulations by pointing to statistics on the financial losses due to cybercrime that it thinks these regulations will mitigate. It doesn't claim that the losses avoided in the first year the regulations would be in full effect (2027) will fully pay for the initial cost of compliance, but its estimates of avoided losses would be much greater than the ongoing costs.
It is, naturally, important to be skeptical of those claims. But if the costs of the regulations in that report are taken as at least somewhat credible, then so should the claims of the benefits. Or, people can give in completely to their confirmation bias and only believe claims that support their preferences while assuming that any claims that contradict them are false.
This article is fearmongering about what regulations on AI would cost without saying anything accurate about what those proposed regulations are supposed to do. Looks to me like Reason is trusting the billionaire tech bros to only provide benefits to the average person, and they aren't the least bit skeptical over how those corporations might use AI to manipulate and cheat consumers or be careless with our data, opening us up to identity theft and other types of fraud.
But that is par for the course for these kinds of libertarians. People who got rich in business are the heroes of capitalism, which proves that we should be letting them do whatever they think is best, and the Invisible Hand will correct any abuses or dangers.
If it does, a patchwork of state and local regulations will slow the development of American AI by imposing billions of dollars of regulatory costs on the industry.
I'm confused as to what libertarian argument is being proposed in the article.
If we get back to first principles... why would it be a good thing to have a far-away, impenetrable bureaucracy decide what laws can or can't be passed by the local representative governments? This "Libertarians for a One-World Government" direction is disturbing.
Edit:
And to be clear, we're not talking about a local law that violates a clear provision or amendment in the constitution, but a regulatory rule on a specific business practice or product. To be clear(er), I don't like the idea of anyone passing regulations that limit AI research, but that should be for the local governments to decide.
And further, I take great pleasure in watching California regulate Silicon Valley out of existence. Let people Learn To Code elsewhere. The idea of the world's bullshit trillion dollar tech companies all huddled together in a 185 sq mile area, where they're all having lunch with each other, sending their kids to the same retarded rainbow bedecked school rooms has had a detrimental effect on the culture. Spread it out a little bit...
Look, I love LLM AIs. But I will be the first to admit that this is no different than flying cars.
Great idea, super exciting, the future is now. I love it all.
But I also know that people can't even drive their cars on the ground, let alone what untold horrors they could cause if they were suddenly airborne. Even assuming the perfect human controlling them, just take mechanical failure into account.
I have been neck-deep in AI stuff for a while now. And I completely understand the desire to pump the brakes until folks can wrap their heads around it. For one, there are huge ethical implications with it - which frankly need to be considered and decided upon for countless professions before unloading AI onto the open market. Medicine - is reliance on an AI-assisted diagnosis that's wrong a form of malpractice? Law - have the rules of professional responsibility for diligence and competence been violated if outsourced to AI? Some judges are already establishing court rules about this kind of thing. For both Medicine and Law, is confidentiality breached by utilizing a publicly available (and data collecting) AI such as ChatGPT? Consider liabilities - does a plaintiff have a cause of action against both the AI company AND the person who relied upon it? Consider work product in ANY field - does an employee/candidate who relies on AI data deserve the recognition if their work was done in part by AI, or does the AI deserve that accolade? To say nothing of its utilization by the media, which is probably the most corrupt of the purposes to which it could be put.
I once heard AI well-described as like a toaster. We know how to toast bread in an oven or a pan, but the toaster just does it faster and better because it's singularly devoted to the process. But we're also a little annoyed when we go to a restaurant and pay $1.50 for a side of toast with our breakfast, because we know that no culinary skill went into it; the toaster did all the work.
The question of humans taking credit, responsibility, and accountability for AI-derived work is something that should really be addressed. And if that means yanking back on the reins of a tech that I'm super excited about, then I'm not going to suggest it's entirely unreasonable.
Carry on.