Colorado's AI Law Is a Cautionary Tale for the Nation
A rushed attempt to regulate artificial intelligence has left lawmakers scrambling to fix their own mistakes.
Colorado is experiencing a major case of buyer's remorse over the state's new artificial intelligence law—a warning for other states rushing to regulate emerging tech.
A rush to regulate AI in Colorado inverted the proper sequence of policymaking. For fast-moving technology, lawmakers should begin with a "try-first" mentality, allowing innovators to introduce their products. Next, they should study any resulting harms and ask whether new laws or regulations could address them. Only then should regulation be pursued, in consultation with a wide range of stakeholders, with rules tailored to the state's capacity and the public's own assessment of the technology's benefits and risks.
Instead of following these best practices, Colorado's Legislature hastily enacted Senate Bill 24-205: Consumer Protections for Artificial Intelligence—an expansive, complex AI law that it is now scrambling to fix.
The law is a warning shot before a shower of similar state AI bills. It also reflects a flawed view of federalism: State legislators cite congressional "inaction" to justify meddling in national issues, but the Constitution has no such exception.
First and Already Flawed
Colorado's AI law was rushed through in May 2024, making it the first major state AI regulation. Though it doesn't take effect until February 2026, it has quickly become a model for other states. But it has already proven to be a cautionary tale. The law's open-ended mandates aim to prevent "algorithmic discrimination," particularly in "high-risk" use cases where AI systems are a "substantial factor" in making "consequential decisions." AI providers must exercise "reasonable care" in complying with these ambiguous new standards or face penalties. The law also mandates risk management plans and algorithmic impact assessments, along with various transparency and monitoring requirements. Critics—including a coalition of small AI developers—warn that these vague directives will stifle innovation.
State lawmakers are now having second thoughts. Democratic Gov. Jared Polis noted in his signing statement that the law would "create a complex compliance regime for all developers and deployers of AI" through "significant, affirmative reporting requirements." He also admitted he was "concerned about the impact this law may have on an industry that is fueling critical technological advancements." Attorney General Phil Weiser recently lamented the "problematic" bill and said "it needs to be fixed." The Colorado Legislature will reconsider the measure in a special session that is set to begin on August 21.
A state AI Impact Task Force formed last year offered no concrete solutions in its January 2025 report. By May, Polis and lawmakers were recommending that implementation be delayed until January 2027, citing confusion and the risk to small businesses. This mirrors other states' struggles with more narrowly focused laws: California's Privacy Protection Agency and New York City's AI Bias Audit Law both delayed enforcement over compliance concerns. Tech regulation often produces unintended consequences—yet many states seem ready to follow Colorado's path.
Don't Be Like Europe
Colorado lawmakers now admit they don't know how to enforce their law and fear it will drive innovators away. Whether the August special session can fix this remains unclear. But the lesson is obvious: rushing complex AI rules has significant downsides for innovation and particularly harms small firms that lack the resources to navigate vague, costly mandates. Existing civil rights and consumer protection laws already address many of the hypothetical risks. Preemptively policing algorithms for potential discrimination imposes a guilty-until-proven-innocent standard that mirrors Europe's heavy-handed model, which has hobbled its digital economy.
Furthermore, it's unclear to what extent such laws are even technically feasible. Bias is inherent to AI systems that are trained on massive datasets and built to predict the most probable next word. Efforts to train away perceived skew can actually make models more likely to produce outputs that tilt in other directions.
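To see why, consider a minimal sketch of next-word prediction (the toy corpus and function names here are hypothetical illustrations, not any production system). A model that picks the most frequent next word can only echo whatever patterns its training data contains, so "debiasing" it means reweighting those frequencies rather than making the output neutral:

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus, standing in for the massive datasets the article describes.
corpus = "the nurse said the nurse was tired the doctor said the doctor was late".split()

# Count how often each word follows each other word (a simple bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prompt_word):
    """Return the most probable next word; the model can only reproduce its data."""
    counts = following[prompt_word]
    return counts.most_common(1)[0][0] if counts else None

# Whatever skew exists in the corpus is reproduced in the output:
print(next_word("the"))  # 'nurse' or 'doctor', decided purely by corpus frequency
```

Real models are vastly larger, but the underlying logic is the same: adjusting the training data or the weights shifts which way the outputs tilt; it cannot make a frequency-driven predictor free of tilt altogether.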
This was illustrated earlier this year when the European Parliament released a statement about how algorithmic discrimination might be addressed under the EU's massive new AI Act. Officials admitted that "shared uncertainty appears to prevail as to how the AI Act's provision on the processing of special categories of personal data for avoiding discrimination should be interpreted."
A National AI Framework Is Needed
Unfortunately, this flawed approach is spreading. Colorado, California, New York, and Illinois account for a quarter of the more than 1,000 AI-related bills currently pending in state legislatures. Some states have resisted, however: Virginia Gov. Glenn Youngkin vetoed a similar bill, citing existing protections and the risk to jobs and investment, while Texas and Connecticut scaled back their proposals after pushback.
State AI mandates could also undermine federal priorities. The Trump administration's AI Action Plan calls for a coordinated, try-first approach to maintain U.S. leadership over China—an approach incompatible with Colorado-style red tape.
Many state AI proposals suffer from a savior complex, claiming to protect all Americans from AI risks. While AI advances surely introduce national security threats, no state has the authority to impose its chosen safeguards on the rest of the country. Congressional inaction on AI is not an invitation for any state to assume that role. States can regulate how their residents use AI in specific contexts—such as by setting training requirements for professionals who use AI in sensitive fields—but they cannot dictate development for the entire country.
Congress should create a light-touch national AI framework to prevent a patchwork of conflicting state rules. In his signing statement for the bill, Polis rightly identified the need for a "cohesive federal approach" that is "applied by the federal government to limit and preempt varied compliance burdens on innovators and ensure a level playing field across state lines."
A federal AI bill could define limited roles for state governments while protecting interstate AI commerce from undue parochial interference. America can do this without importing Europe's cumbersome model.