Artificial Intelligence

Trump Just Released a Framework To Govern AI. Here Are 7 Key Takeaways.

The National AI Legislative Framework is a return to the administration's pro-AI position.


Only weeks after designating Anthropic a supply chain risk, the Trump administration is returning to a pro-AI stance with Friday's release of the National AI Legislative Framework. 

The framework, which was commissioned via executive order in December, includes seven pillars to advance American AI development. Here are some of the top takeaways:  

1. Clarifying Prohibited AI-Generated Content To Avoid Excessive Litigation

The first pillar directs Congress to build upon the bipartisan TAKE IT DOWN Act, which outlaws the nonconsensual online publication of intimate photos and videos, including those created with AI. While the act is well-intentioned, Reason's Elizabeth Nolan Brown has warned that it can be "easily wielded as a jawboning tool to get tech platforms to do an administration's bidding." At the same time, the administration cautions Congress against "ambiguous standards about permissible content" and "open-ended liability" that could invite excessive litigation. 

2. Enabling AI Data Centers To Provide Their Own Power 

The second pillar is a bit of a double-edged sword. On the one hand, it directs Congress to "ensure that residential ratepayers do not experience increased electricity costs" as a consequence of AI. On the other, it recommends streamlining federal permitting to allow AI developers to more easily deploy on-site power generation to avoid straining the grid. To this end, Sen. Tom Cotton (R–Ark.) introduced the DATA Act in January to exempt such on-site power generation from federal utility regulation.

3. Deferring to the Judiciary on Copyright 

In a remarkable degree of deference to the judicial branch, especially for the Trump administration, the framework recommends that Congress allow courts to sort out whether the training of AI models on copyrighted material violates copyright laws. (A federal judge recognized Anthropic's right to train its large language models on lawfully acquired copyrighted materials last June.) It also recognizes the potential need for federal legislation on the unauthorized use of somebody's likeness for commercial purposes, e.g., using an AI-generated avatar of a famous actor to promote a product without the actor's permission, but says "Congress should prevent persons from abusing such a framework to stifle free speech online." Sen. Chris Coons' (D–Del.) bipartisan NO FAKES Act, however, would invite exactly that kind of abuse by holding platforms liable for hosting such replicas.  

4. Preventing Government Censorship

Surprisingly, given the administration's demonstrated hostility to the First Amendment, the framework calls on Congress to empower "Americans to seek redress from the Federal Government for agency efforts to…dictate the information provided by an AI platform." This is rather ironic, considering the administration's recent attempt to compel certain outputs from Anthropic and its subsequent designation of the company as a supply chain risk when it refused. 

5. Preventing Regulatory Proliferation  

The framework requests that federal datasets be made AI-accessible "for industry and academia" ("the public" would suffice)—providing businesses and researchers access to a wealth of taxpayer-funded information that may be used to better understand and solve problems—and that regulatory sandboxes be established for AI experimentation. This section also requests that Congress "not create any new federal rulemaking body to regulate AI." This recommendation contrasts with New York's RAISE Act, signed into law by Democratic Gov. Kathy Hochul in December, which created the Office of AI Transparency within the Department of Financial Services to regulate large developers' deployment of frontier AI models. 

6. Collecting Information on AI's Labor Market Effects 

The framework recommends that Congress ensure existing education and workforce training programs "affirmatively incorporate AI training" through nonregulatory means. (Considering daily workplace use of AI has tripled since 2023, this is an obvious recommendation that hardly needs to be made.) It also suggests that Congress "expand Federal efforts to study trends in task-level workforce realignment." While this is vague, such data could conceivably be leveraged to prohibit or penalize private companies for incorporating AI into their businesses, as one New York bill seeks to do. 

7. Preempting Onerous State Laws 

Perhaps the most important recommendation is for Congress to "preempt state AI laws that impose undue burdens to ensure a minimally burdensome national standard," something President Donald Trump has sought to accomplish circuitously by conditioning certain federal broadband funding on the nonenforcement of onerous state AI laws and regulations via his December executive order. Specifically, such legislation should preempt state regulations concerning AI development and the use of AI for otherwise lawful activities (effectively recognizing Americans' right to compute, which a New Hampshire bill seeks to achieve at the state level), and should bar states from penalizing AI developers for unlawful use by third parties. The framework urges Congress to do all of this while respecting "key principles of federalism" by not interfering with states' police powers, zoning laws, or regulations on local government use of AI. 

Adam Thierer, senior fellow at the R Street Institute, regards Trump's AI framework as evidence of the president's "try-first" approach to AI policy, as opposed to the "regulate-first" vision described in Sen. Marsha Blackburn's (R–Tenn.) TRUMP AMERICA AI Act. This bill, which includes the Kids Online Safety Act and the NO FAKES Act, would dramatically hinder AI development by imposing a punitive liability regime on AI developers and deployers. Neil Chilson, head of AI policy at the Abundance Institute, says the "serious framework" is "a clear continuation of Trump's belief that America should lead in AI and that the way to do that is to remove barriers." Both Thierer and Chilson say it's time for Congress to act, and that's just what it appears to be doing. 

Republican leaders in the House, including Speaker Mike Johnson (R–La.), released a statement shortly following the publication of the framework, saying that "House Republicans look forward to working across the aisle to enact a national framework that unleashes the full potential of AI, cements the U.S. as the global leader, and provides important protections for American families." Given Democratic representatives' denunciation of Trump's AI Action Plan and Democratic senators' opposition to the previously considered AI moratorium, Johnson's statement may be more aspirational than realistic.