Colorado's AI Law Is a Cautionary Tale for the Nation
A rushed attempt to regulate artificial intelligence has left lawmakers scrambling to fix their own mistakes.

Colorado is experiencing a major case of buyer's remorse over the state's new artificial intelligence law—a warning for other states rushing to regulate emerging tech.
Colorado's rush to regulate AI inverted the proper order of policymaking. For fast-moving tech, lawmakers should begin with a "try-first" mentality, allowing innovators to introduce their products. Next, they should study any resulting harms and whether new laws or regulations could address them. Only then should effective regulation be pursued, in consultation with a wide range of stakeholders, with rules tailored to the state's capacity and the public's own assessment of the technology's benefits and risks.
Instead of following these best practices, Colorado's Legislature hastily enacted Senate Bill 24-205: Consumer Protections for Artificial Intelligence—an expansive, complex AI law that it is now scrambling to fix.
The law is a warning shot before a shower of similar state AI bills. It also reflects a flawed view of federalism: State legislators cite congressional "inaction" to justify meddling in national issues, but the Constitution has no such exception.
First and Already Flawed
Colorado's AI law was rushed through in May 2024, making it the first major state AI regulation. Though it doesn't go into effect until February 2026, it has quickly become a model for other states. But it has already proven a cautionary tale. The law's open-ended mandates aim to prevent "algorithmic discrimination," particularly for "high-risk" use cases where AI systems represent a "substantial factor" in making "consequential decisions." AI providers must use "reasonable care" when attempting to comply with these ambiguous new standards or face penalties. New risk management plans and algorithmic impact assessments are mandated along with various transparency and monitoring requirements. Critics—including a coalition of small AI developers—warn these vague directives will stifle innovation.
State lawmakers are now having second thoughts. Democratic Gov. Jared Polis noted in his signing statement that the law would "create a complex compliance regime for all developers and deployers of AI" through "significant, affirmative reporting requirements." He also admitted he was "concerned about the impact this law may have on an industry that is fueling critical technological advancements." Attorney General Phil Weiser recently lamented the "problematic" bill and said "it needs to be fixed." The Colorado Legislature will reconsider the measure in a special session that is set to begin on August 21.
A state AI Impact Task Force formed last year offered no concrete solutions in its January 2025 report. By May, Polis and lawmakers recommended delaying implementation until January 2027, citing confusion and risk to small businesses. This mirrors other states' struggles with more narrowly focused rules: California's Privacy Protection Agency delayed enforcement of its rules, and New York City delayed enforcement of its AI Bias Audit Law, both over compliance concerns. Tech regulation often produces unintended consequences—yet many states seem ready to follow Colorado's path.
Don't Be Like Europe
Colorado lawmakers now admit they don't know how to enforce their law and fear it will drive innovators away. Whether the August special session can fix this remains unclear. But the lesson is obvious: rushing complex AI rules has significant downsides for innovation and particularly harms small firms that lack the resources to navigate vague, costly mandates. Existing civil rights and consumer protection laws already address many of the hypothetical risks. Preemptively policing algorithms for potential discrimination imposes a guilty-until-proven-innocent standard that mirrors Europe's heavy-handed model, which has hobbled its digital economy.
Furthermore, it's unclear to what extent such laws are technically feasible. Bias is an inherent characteristic of AI systems trained on massive datasets to predict the next most probable word. Efforts to train away perceived skew can actually make models more likely to produce outputs that tilt in certain directions.
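To make that point concrete, consider a toy sketch (a hypothetical trigram counter, not any real product): a model that predicts the next word from corpus counts inherits whatever skew the corpus contains, and greedy decoding amplifies it.

    from collections import Counter, defaultdict

    # Toy corpus with a deliberate 2:1 skew: "nurse said she" appears twice,
    # "nurse said he" appears once.
    corpus = ("the nurse said she was late . "
              "the nurse said she was early . "
              "the nurse said he was late .").split()

    # Count which word follows each two-word context.
    follows = defaultdict(Counter)
    for (a, b), c in zip(zip(corpus, corpus[1:]), corpus[2:]):
        follows[(a, b)][c] += 1

    print(follows[("nurse", "said")])  # Counter({'she': 2, 'he': 1})

    # Greedy decoding always emits the majority word, turning the corpus's
    # 2:1 skew into a 100 percent skew in generated text.
    print(follows[("nurse", "said")].most_common(1)[0][0])  # 'she'

The same dynamic, at vastly larger scale, is why "debiasing" a trained model is a rebalancing act rather than a removal: shifting the counts in one direction necessarily tilts outputs in another.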
The difficulty was illustrated earlier this year when the European Parliament released a statement about how algorithmic discrimination might be addressed under its massive new AI Act. Officials admitted that "shared uncertainty appears to prevail as to how the AI Act's provision on the processing of special categories of personal data for avoiding discrimination should be interpreted."
A National AI Framework Is Needed
Unfortunately, this flawed approach is spreading. Colorado, California, New York, and Illinois account for a quarter of the over 1,000 AI-related bills currently pending throughout the states. However, some states have resisted: Virginia Gov. Glenn Youngkin vetoed a similar bill, citing existing protections and the risk to jobs and investment. Texas and Connecticut scaled back their proposals after pushback.
State AI mandates could also undermine federal priorities. The Trump administration's AI Action Plan calls for a coordinated, try-first approach to maintain U.S. leadership over China—an approach incompatible with Colorado-style red tape.
Many state AI proposals suffer from a savior complex, claiming to protect all Americans from AI risks. While AI advances surely introduce national security threats, no state has the authority to impose its selected safeguards on the rest of the country. Congressional inaction on the AI front is not an invitation for any state to assume that role. States can regulate how their residents use AI in specific contexts—such as by setting training requirements for professionals who use AI in sensitive fields—but they cannot dictate development for the entire country.
Congress should create a light-touch national AI framework to prevent a patchwork of conflicting state rules. In his signing statement for the bill, Polis rightly identified the need for a "cohesive federal approach" that is "applied by the federal government to limit and preempt varied compliance burdens on innovators and ensure a level playing field across state lines."
A federal AI bill could define limited roles for state governments while protecting interstate AI commerce from undue parochial interference. America can do this without importing Europe's cumbersome model.
Their Artificial Insemination law has jerked a lot of folks around.
> State lawmakers are now having second thoughts. Democratic Gov. Jared Polis noted in his signing statement that the law would "create a complex compliance regime for all developers and deployers of AI" through "significant, affirmative reporting requirements." He also admitted he was "concerned about the impact this law may have on an industry that is fueling critical technological advancements." Attorney General Phil Weiser recently lamented the "problematic" bill and said "it needs to be fixed."
So to be the most libertarian governor in the solar system merely requires you to say stuff like, "Not too sure about this..." as you place your signature on the bill. That tracks.
"We have to ... pass the bill to ... find out what's in it."
We have to pass the bill to realize how fking stupid we are.
The bill passed with a veto-proof majority.
Maybe they should mandate that all AI in the state use Marijuana. Leave that up to the developers on how computers can actually consume drugs, but failure to comply is still a crime.
"A National AI Framework is needed"
No. No. And No. WTF kind of progtard stupidity is a "Libertarian" magazine supporting?
Exactly. This rag has long since lost sight of any libertarian individualist principles.
No, no, no. Just NO.
Sounds like the authors actually believe politicians could actually be effective in actually managing private businesses in which they actually have no experience and no skin in the game.
Actually, politicians don't, because if they did, they wouldn't have to resort to getting elected to make a living.
To be only slightly sarcastic, this recommendation is so far from useful as to be, well, useless. If politicians actually took this advice and followed it ... they'd never regulate anything and be out of work.
I'd much rather we paid politicians and they did nothing. Call it a lottery for slimeballs. Let them run for office just like now, give them all the publicity the glory hounds want, and in exchange, they do absolutely nothing.
> A state AI Impact Task Force formed last year offered no concrete solutions in its January 2025 report.
Looking at the bios on that task force, it is exactly what one would expect - even from a 'national task force' of the sort the author seems fond of. Mostly cronies and tech bros whose primary function is to make sure that nothing is regulated unless it delivers benefits to those who fully intend to capture any regulation. Enough bureaucrats and political consultants to make sure that govt bureaucrats control the primary agenda and implementation. A smidgen of token wokes to ensure that 'diversity' is achieved and that no protest will ensue from the Hispanic-Asian lesbian RPG community.
I do agree that there is a need to discuss the public impact of AI. But it doesn't surprise me that the issue is already being framed by Reason as: make it national (where DC and Big Tech lobbyists/bureaucrats will buy whatever decisions they choose to buy from their panel of experts) vs. make it state-based (which is likely to become a clusterfuck and will harm the dreams of Big Tech oligarchs). With NO discussion of why this is yet another fucking takeover of public discussion by appointed task forces of chosen 'experts'.
This is exactly where a panel of randomly selected citizens could raise actual concerns. Rather than the usual panel of appointed experts issuing a report of meaningless gibberish written by an incompetent AI bot and filed in a cabinet deep within a warehouse in Cheyenne Mountain that will lead to moving decisions upward to DC (or perhaps Davos if DC also finds it difficult to construct anything but a meaningless gibberish report).
I hope at least that those experts chowed down on quality donuts.
>also reflects a flawed view of federalism: State legislators cite congressional "inaction" to justify meddling in national issues, but the Constitution has no such exception.
It doesn't need such an exception. States are sovereign. The idea that we need to keep deferring to DC is why the country is in the mess it's in.
Colorado's law might suck but it's their prerogative. What's next - defer to DC for school curricula? To set speed limits?
^Well Said +1000000000000.
Meanwhile, the government just keeps pushing to control the media again, using AI as an excuse.
A good example of why a federal framework is worse than useless. A flirty Meta AI bot entices an impaired elderly man (from NJ) to visit NYC. He falls in a train station parking lot and dies from the injuries. The bot gave no indication of whether it was a real person or a bot.
Everyone knows exactly what will happen with a federal regulation. Big Tech will be given complete immunity to do anything they fucking want, no matter how pathological or deluded those bots become. And of course Reason will justify whatever the VCs/etc. buy in DC.
Golly, your "libertarian" governor crush is on the wrong side yet again. How can that be when he's both gay and a "moderate" Democrat? I expect to see him at the top of the LP ticket in 2028.
You favor AI regulation? Polis opposed it at the state level and killed it by appointing a task force that would do that dirty work - or at least turn it all into gibberish so it can't go anywhere.
> no state has the authority to impose its selected safeguards on the rest of the country.
Bet you'd be OK if it was the USDC doing it.
If I were a Chinese or Russian intelligence official and I wanted to make sure my government developed AGI before the USA did, I would lobby for just this kind of legislation.
Keep in mind, AGI is exponential. If any nation gets a serious head start, there is no such thing as "catch-up" or MAD, as there is in a nuclear arms race.
The first across the finish line is likely to be the undisputed permanent winner.
I trust a democracy to deliver an AGI that will not be deployed to achieve world domination more than an autocracy. Not much more, but more. Guardrails and brakes are a very smart idea, yet one we cannot use just yet. We must redline the engine a while, as dangerous as that is.
Maybe there's some private knowledge that indicates the Chinese or Russians want to invent AGI. But everything I've seen publicly indicates they want to IMPLEMENT small/local language models. Those can be easily installed on an individual device and cost almost nothing extra in either energy or 'latest GPUs'. They can be turned into serious specialists so police can do surveillance and factories can do robots and bureaucrats can be Confucius and the CCP can run military ops. Those are the ones that can be manufactured and exported to countries that don't have the interest/skills/energy to build out data centers. And I suspect that 1000 people running 1000 small/local language models (and other models) achieve orders of magnitude more than one rule-the-world type in search of the perfect AGI.
AGI looks to me like a delusion of those who want to suck endless streams of money into a black hole in hopes of exactly what you posit - that the black hole can't be questioned even for a nanosecond because 'world domination' is at risk.
Rule of thumb is that a 20 GB VRAM card can currently 'run' a 10-billion-parameter LM. That's $700, well within reach without building a data center powered by a nuclear reactor. US AI companies are focused entirely on upping the number of parameters in their models. The Chinese (prob not the Russians) are interested in reducing the price of graphics cards that can run whatever size LM they see a mass use case for.
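For what it's worth, that rule of thumb is simple arithmetic: weight memory is roughly parameter count times bytes per parameter, so 10 billion parameters at 16-bit precision is about 20 GB for the weights alone (KV cache and activations come on top, which is why such models are usually run quantized). A minimal sketch of the back-of-envelope math, assuming a dense model:

    def weight_vram_gb(params_billions: float, bytes_per_param: float) -> float:
        # Weights only: 1e9 params * N bytes each = N gigabytes per billion params.
        # Ignores KV cache, activations, and framework overhead.
        return params_billions * bytes_per_param

    for precision, nbytes in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
        print(f"10B params @ {precision}: ~{weight_vram_gb(10, nbytes):.0f} GB")
    # fp16 -> ~20 GB, int8 -> ~10 GB, int4 -> ~5 GB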
"Critics—including a coalition of small AI developers—warn these vague directives will stifle innovation. "
Weren't you paying attention to the last admin? This is a feature. Their goal was to limit it to the hands of a few gifted players while shutting everyone else out. Making it too costly. Thus locking in elite government control of AI.