Senate Votes 99–1 To Remove AI Moratorium from 'Big, Beautiful Bill'
Now nearly 100 state AI laws will remain in force—and nearly 1,000 more are already waiting in the wings.

The Senate removed the AI moratorium provision from the reconciliation bill in a 99–1 vote early Tuesday morning. Sen. Marsha Blackburn (R–Tenn.) and Sen. Maria Cantwell (D–Wash.) co-sponsored the amendment to eliminate the provision after Blackburn reneged on a compromise with Sen. Ted Cruz (R–Texas) to amend it over the weekend.
The Senate Commerce, Science, and Transportation Committee released the text of its original AI moratorium provision on June 5. This earlier version imposed a 10-year period during which the commerce secretary could withhold $500 million of newly appropriated Broadband Equity, Access, and Deployment (BEAD) funding for "the construction and deployment of infrastructure for the provision of artificial intelligence models" from states found "limiting, restricting, or otherwise regulating artificial intelligence models." Last week, the Senate parliamentarian ruled that the moratorium satisfied the Byrd rule, which allows only budget issues to be considered in reconciliation text, after the committee specified that it would not condition all $42 billion of BEAD funding appropriated in the 2021 Bipartisan Infrastructure Law on the nonenforcement of state-level AI laws.
Still, intraconservative debates about the AI moratorium continued.
Blackburn did "not want the moratorium to apply to state laws affecting recording artists…and laws affecting child sexual abuse material," explained the Institute for Law and AI. Sen. Rand Paul (R–Ky.) opposed the moratorium as federal overreach that compromises state-level experimentation in AI regulation, according to MarketPulse. Meanwhile, Cruz defended the AI moratorium as preventing states from "strangling AI deployment with EU-style regulation," according to talking points put out by the committee, which he chairs.
The amended AI moratorium, agreed to by Blackburn over the weekend, conditioned the disbursement of the $500 million BEAD funding to states on their nonenforcement of "any law or regulation…regulating artificial intelligence models" for five years. States would still be allowed to enforce "generally applicable" laws, like those "pertaining to unfair or deceptive acts or practices, child online safety, child sexual abuse material, rights of publicity, [and] protection of a person's name, image, voice, or likeness [that] may address, without undue or disproportionate burden, artificial intelligence."
The Institute for Law and AI warned that the compromise's "undue or disproportionate burden" language would "generate additional litigation [and] uncertainty" while failing to shield those laws Blackburn sought to protect. Neil Chilson, head of AI policy for the Abundance Institute, disagreed and argued that an expanded "generally applicable law" exemption would have made it more likely that those laws Blackburn sought to shield would have been protected.
Despite signaling support for the provision's revised language on Sunday, Blackburn backed out late Monday night. "This provision could allow Big Tech to continue to exploit kids, creators, and conservatives," she said, according to The New York Times.
The Blackburn-Cruz compromise represented "a much-needed measure that will help America address the growing patchwork of over 1,000 AI-related laws popping up across the nation," said Adam Thierer, senior fellow for the technology and innovation team at the R Street Institute. Chilson tells Reason the elimination of the AI moratorium is disappointing but not surprising.
Now that the provision has been struck from the reconciliation bill, only time will tell if a patchwork of conflicting state-level regulations hamstrings the American AI industry. Though the moratorium provision has been defeated, Congress still "has a job to do to establish a federal framework that keeps the US open for innovation, and that work continues," says Chilson.
Remember when a larger, more supreme layer of government banning a ban was terrible and unlibertarian?
Yes. But I also remember when a larger, more supreme layer of government overriding zoning for housing was a good thing. It’s all so confusing!
Not to mention the larger, more supreme layer of government opening national borders, and forcing lesser, local layer governments to deal with the problems. Until the supreme layer changed the border policy, and then the local layer governments decided not to cooperate.
Confusing, indeed.
Well, now we are in for utter chaos from a group of well-meaning state governments making bizarre rules for AI, informed by their deep knowledge of the subject, funded by their massive budgets.
No ChatGPT in MS. AI-only thinking in CA. Products can’t ship/distribute.
Fuck em’. I know how to VPN.
Good. Let federalism have at it and may the best state win.
This is not a situation where federalism can work. No company large enough to deploy AI can make a profit entirely within one state. That inevitably means that companies must design to the lowest common denominator. Which means we customers are going to be saddled with the most restrictive state's regulations regardless of where we (or the company) actually live.
Federalism assumes that state regulations stay (mostly) within state boundaries so we customers can experiment and compare notes. Whenever that assumption is violated, we lose the ability to compare and federalism fails.
That inevitably means that companies must design to the lowest common denominator. Which means we customers are going to be saddled with the most restrictive state's regulations regardless of where we (or the company) actually live.
So, like cars.
Worse. There is at least a large enough and identifiable enough non-CA market to make the manufacture of some non-CA-compliant cars profitable. With AI, you usually can't even identify where the caller/customer is coming from to figure out which laws you have to comply with.
I guess no one's ever heard of the 10th Amendment.
Ironically, this is more or less correct.
The people who have not heard of it the most are judges.
Oh, they've heard of it. They just don't believe in it.
Violation of the 10th has led to large federal caseloads in different districts giving conflicting outcomes.
Shrink the fed.
Who knows more about AI than smart lawmakers?
My dead grandmother, whose VCR was always blinking "12:00"?
They voted to keep in place federal spending on various benefits for ILLEGAL immigrants.
Our leaders hate us and want us to go away.
Leaders lead. Rulers rule. We don't have leaders.
Does this mean that Skynet can't cross state borders?
And the BIG elephant in the room is...............
WTF are the 'Feds' doing subsidizing ("$500 million of newly appropriated") the states?
I don't see an enumerated power to subsidize the states' infrastructure.
There isn't much left that is the USA in this nation.
Practically all of it reflects that of a [Na]tional So[zi]alist nation.
And anyone has to wonder why it's all falling apart, like socialist nations always do.