A 10-Year Pause on State AI Laws Is the Smart Move
A proposed federal moratorium on state-level AI regulations is a necessary step toward a unified strategy that protects innovation and equity alike.
Congress is currently considering a policy that could define America's technological future: a proposed 10-year moratorium on a broad swath of state-level artificial intelligence (AI) regulations.
While the idea of pausing state legislative action might seem radical to some—and has certainly caught proponents of localized AI governance off guard—it is precisely the bold stroke this moment demands. This is not about stifling oversight, but about fostering mutually assured innovation—a framework where a unified, predictable national approach to AI governance becomes the default, ensuring that the transformative power of AI reaches every corner of our nation, especially the people who need it the most.
The concept of a consistent national strategy has garnered support from a diverse chorus of voices, including Colorado Democratic Gov. Jared Polis, Rep. Jay Obernolte (R–CA), and leading AI developers at OpenAI. They recognize that AI's potential is too vast, and its development too critical, to be balkanized into a patchwork of 50 different regulatory schemes.
At Meta's recent Open Source AI Summit, I witnessed firsthand the burgeoning applications of AI that promise to reshape our world for the better. Consider a health care system, like the one at UTHealth Houston, using AI to proactively identify patients likely to miss crucial appointments. By automatically rescheduling these individuals' appointments, the system saved hundreds of thousands of dollars, but more importantly, it ensured continuity of care for potentially vulnerable patients.
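To make the mechanics concrete, here is a minimal, purely illustrative sketch of the kind of no-show risk model such a system might use. The UTHealth Houston implementation is not public, so every feature, coefficient, and threshold below is a hypothetical stand-in, and the data are synthetic.

```python
# Illustrative sketch only: the actual UTHealth Houston system is not public.
# This shows the general shape of an appointment no-show risk model using
# hypothetical features and synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical features: scheduling lead time, prior no-show count,
# travel distance, and whether an appointment reminder was confirmed.
X = np.column_stack([
    rng.integers(0, 90, n),    # lead time in days
    rng.poisson(0.5, n),       # prior no-shows
    rng.exponential(8.0, n),   # travel distance in miles
    rng.integers(0, 2, n),     # reminder confirmed (0/1)
])

# Synthetic label: longer lead times and prior no-shows raise risk,
# a confirmed reminder lowers it.
logit = 0.02 * X[:, 0] + 0.8 * X[:, 1] + 0.03 * X[:, 2] - 1.2 * X[:, 3] - 1.5
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

risk = model.predict_proba(X_test)[:, 1]
print("held-out AUC:", round(roc_auc_score(y_test, risk), 3))

# Patients above a chosen risk threshold would be flagged for proactive
# outreach or automatic rescheduling in a real scheduling workflow.
flagged = np.where(risk > 0.6)[0]
print("patients flagged for outreach:", len(flagged))
```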
Consider another innovation: AI tools that meticulously analyze data from colonoscopies, significantly increasing the chances of detecting cancerous or precancerous conditions at their earliest, most treatable stages. Or look at the global efforts of the World Resources Institute, leveraging AI and satellite imagery to track deforestation in near real time, providing invaluable data to combat climate change and inform sustainable land-use policies.
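For the deforestation example, here is a toy sketch of the underlying change-detection idea: compare a vegetation index between two satellite passes and flag pixels where it collapses. The actual pipelines behind WRI's work (such as Global Forest Watch) are far more sophisticated; the bands, thresholds, and rasters below are synthetic and purely illustrative.

```python
# Toy sketch of NDVI-based forest-loss detection on synthetic rasters.
# Real near-real-time monitoring systems use far richer models; this only
# illustrates the core change-detection idea.
import numpy as np

rng = np.random.default_rng(1)

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red bands."""
    return (nir - red) / (nir + red + 1e-6)

# Two hypothetical 100x100 scenes of the same forested area, a year apart.
shape = (100, 100)
nir_before = rng.uniform(0.5, 0.8, shape)
red_before = rng.uniform(0.05, 0.15, shape)
nir_after, red_after = nir_before.copy(), red_before.copy()

# Simulate clearing in one corner: the vegetation signal drops sharply.
nir_after[:30, :30] = rng.uniform(0.15, 0.25, (30, 30))
red_after[:30, :30] = rng.uniform(0.2, 0.3, (30, 30))

# Flag pixels whose NDVI fell by more than a chosen threshold.
drop = ndvi(nir_before, red_before) - ndvi(nir_after, red_after)
loss_mask = drop > 0.3
print(f"pixels flagged as probable forest loss: {loss_mask.sum()} "
      f"({100 * loss_mask.mean():.1f}% of the scene)")
```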
These are not abstract academic exercises; they are tangible solutions to pressing human problems, with the potential to drastically improve health care outcomes, facilitate more robust climate forecasts, aid food production, and contribute to more equitable societies.
These green shoots of innovation, however, are incredibly fragile. They require not just brilliant minds and dedicated research, but also a stable and predictable environment in which to grow. A moratorium on disparate state regulations provides precisely this—regulatory certainty. This certainty is a powerful catalyst, unlocking further investment, attracting top-tier talent, and allowing nascent technologies to mature and disseminate across the nation.
The alternative is a landscape where only the largest, most well-funded labs can navigate the regulatory maze, while groundbreaking tools from startups and research institutes—tools that could disproportionately benefit individuals in precarious social, economic, or health conditions—wither on the vine. This is the crux of mutually assured innovation: states collectively leaning into a uniform path to governance, preventing a scenario where innovation becomes a luxury of the few, rather than a right for all.
A hodgepodge of state regulations, however well-intentioned, will inevitably stymie AI innovation. Labs could be subjected to conflicting, sometimes contradictory, compliance schemes. While behemoths like Google or Microsoft might absorb the legal and operational costs of navigating 50 different sets of rules, smaller labs and university research teams would face a disproportionate burden. They would be forced into a perpetual state of vigilance, constantly monitoring legislative trackers, investing in legal counsel to ensure they remain compliant with new provisions, and diverting precious resources away from research and development.
Advocates for states' rights in AI regulation often dismiss these concerns as inflated. Let's, for a moment, entertain that skepticism and play out a realistic scenario.
Imagine that just three of the hundreds of AI-related bills currently pending before state legislatures pass into law: California's S.B. 813, Rhode Island's S.B. 358, and New York's proposed Responsible AI Safety and Education (RAISE) Act.
- California's S.B. 813: The bill establishes a process for the Attorney General (A.G.) to designate a private entity as a Multistakeholder Regulatory Organization (MRO) that certifies AI models and applications based on their risk mitigation plans. MROs must address high-impact risks, including cybersecurity threats, chemical, biological, radiological, and nuclear (CBRN) threats, malign persuasion, and AI model autonomy, with the A.G. establishing minimum requirements and conflict-of-interest rules. The MRO has the authority to decertify non-compliant AI systems and must submit annual reports to the Legislature and the A.G. on risk evaluation and mitigation effectiveness.
- Rhode Island's S.B. 358: This bill takes a different tack, seeking to establish "strict liability for AI developers for injuries caused by their AI systems to non-users," according to OneTrust DataGuidance. Liability would apply if the AI's actions were considered negligent "or an intentional tort if performed by a human," with the AI's conduct being "the factual and proximate cause of the injury," and the injury not being "intended or reasonably foreseeable by the user." It even presumes "the AI had the relevant mental state for torts requiring such," a novel legal concept.
- New York's RAISE Act: This act would empower the state's A.G. to regulate "frontier AI models" to prevent "critical harm" (e.g., mass casualties, major economic damage from AI-assisted weaponry, or autonomous AI criminality). It proposes to do so by requiring labs to implement a "written safety and security protocol" based on vague "reasonableness" standards and to avoid deploying models that create an "unreasonable risk." The act also mandates annual third-party audits and relies on an A.G.'s office and judiciary that may lack the specialized expertise for consistent enforcement, potentially penalizing smaller innovators more harshly.
The sheer diversity in these approaches is telling. California might mandate specific risk assessment methodologies and an oversight board. Rhode Island could impose a strict liability regime with novel legal presumptions. New York could demand adherence to ill-defined "reasonableness" standards, enforced by an A.G.'s office with manifold other priorities. Now multiply this complexity by 10, 20, or even 50 states, each with its own definitions of "high-risk AI," "algorithmic bias," "sufficient transparency," or unique liability and enforcement standards. The result is a compliance nightmare that drains resources and chills innovation.
There are profound questions about whether states possess the institutional capacity—from specialized auditors to technically proficient A.G. offices and judiciaries—to effectively implement and enforce such complex legislation. The challenge of adjudicating novel concepts like strict AI liability, as seen in Rhode Island's bill, or interpreting vague "reasonableness" requirements, as in the New York proposal, further underscores this capacity gap. Creating new, effective regulatory bodies and staffing them with scarce AI expertise is a monumental undertaking, often underestimated by legislative proponents. The risk, as seen in other attempts to regulate emerging tech, is that enforcement becomes delayed or inconsistent, or falls hardest on those least able to defend themselves, rather than achieving the intended policy goals.
As some states potentially reap the economic and social benefits of AI adoption under a more permissive or nationally harmonized framework, residents and businesses in heavily regulated states may begin to question the wisdom of their localized approach. The political will to maintain stringent, potentially innovation-stifling regulations could erode as the comparative advantages of AI become clearer elsewhere.
Finally, the rush to regulate at the state level often neglects full consideration of the coverage afforded by existing laws. As detailed in extensive lists by the A.G.s of California and New Jersey, many state consumer protection statutes already address AI harms. Texas' A.G. has likewise leveraged the state's primary consumer protection statute to shield consumers from such harms. Though some gaps may exist, legislators should, at a minimum, conduct a full review of existing laws before adopting new legislation.
No one is arguing for a complete abdication of oversight. However, the far more deleterious outcome is a fractured regulatory landscape that slows the development and dissemination of AI systems poised to benefit the most vulnerable among us. These individuals cannot afford to wait for 50 states to achieve regulatory consensus.
A 10-year moratorium is not a surrender to unchecked technological advancement. It is a strategic pause—an opportunity to develop a coherent national framework for AI governance that promotes safety, ethics, and accountability while simultaneously unleashing the immense innovative potential of this technology. It is a call for mutually assured innovation, ensuring that the benefits of AI are broadly shared and that America leads the world not just in developing AI, but in deploying it for the common good.