Anthropic CEO Refuses Pentagon Demands To Remove Safeguards on Military AI
Dario Amodei penned a public letter explaining the danger of the Defense Department's request to remove certain constraints from Claude, and refusing it outright.
A battle is brewing inside the Pentagon that could determine the future of American military strategy.
On Tuesday, Defense Secretary Pete Hegseth pledged to cut ties with Anthropic—one of the two AI providers authorized by the Pentagon for classified use—unless the company removed all safeguards from Claude by Friday. This comes after a January memo, in which Hegseth directed the department to only "utilize [AI] models free from usage policy constraints that may limit lawful military applications." On Thursday, Anthropic CEO Dario Amodei refused Hegseth's ultimatum.
In a statement published Thursday night, Amodei said Anthropic would not accommodate the Department of Defense's request to remove the safeguards on its AI model because, "in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values."
In his letter, Amodei grants that "the Department of War, not private companies, makes military decisions." However, Amodei refused to capitulate to Hegseth's demands, saying that "frontier AI systems are simply not reliable enough to power fully autonomous weapons," and "mass domestic surveillance is incompatible with democratic values."
Amodei's response is not surprising. He's long warned that AI can be used nefariously and repeatedly advocated for regulation. Ironically, the very government Amodei trusted to ensure AI safety is now looking to weaponize the technology.
Unlike Amodei's previous calls for government intervention, which would have insulated Anthropic from competition, this decision threatens Anthropic's competitiveness.
On Tuesday, Hegseth threatened to label Anthropic a supply chain risk in the event of noncompliance. This "would ban all other DoD suppliers…from using Anthropic in their fulfillment of DoD contracts," explains Dean Ball, a senior fellow at the Foundation for American Innovation who served as senior policy adviser for AI and emerging technology at the Office of Science and Technology Policy in 2025.
More disturbing still is Hegseth's invocation of the Defense Production Act (DPA), which "confers upon the President a broad set of authorities to influence domestic industry in the interest of national defense."
Among these authorities are Titles I, III, and VII. Title III grants the president the authority to subsidize certain industries via loans and purchase commitments, while Title VII allows the president to compel information from companies. Title I "is a more straightforwardly Soviet power," says Ball, and gives the government the authority "to directly command the production of industrial goods." With this power, the Defense Department "intends to…command Anthropic to make a version of Claude that can choose to kill people without any human oversight," he says.
Hegseth's demands vindicate Amodei's mid-February warning to New York Times columnist Ross Douthat that AI can be used to undermine constitutional rights. He expressed particular concern about AI rendering mass surveillance data hyperlegible, empowering the government to efficiently parse through and act on what is currently an overwhelming amount of data. This would "make a mockery of the Fourth Amendment by…finding technical ways around it," he said.
On Thursday, Under Secretary of Defense Emil Michael blithely dismissed Amodei's concerns: "Mass surveillance violating the 4th Amendment…is illegal which is why the @DeptofWar would never do it." But an activity's illegality does not mean the government won't engage in it. In this case, it already has. Immigration and Customs Enforcement, for instance, has been leveraging AI-powered technology for domestic surveillance, explains Reason's Autumn Billings.
It's unclear how this situation will be resolved. Anthropic could forfeit its multimillion-dollar Pentagon contract, lose other business due to its designation as a supply chain risk, or even be nationalized by the feds. Still, Amodei can rest easy knowing that he has taken a stand for privacy and moral responsibility.
Is that what Lando Calrissian did when he gave Han Solo to Vader? Standing for moral responsibility?
Since Pete specifically stipulated lawful military operations, which is exactly what the contract Anthropic celebrated supporting covered, wouldn't that at least make him more like the Darth Vader who threw Palpatine into the Death Star's reactor core? If not, doesn't it make Amodei the one who is now trying to alter the deal?
Seems like SSDD with tech companies: they claim to have morals and that their technology is inherently virtuous. Then, when someone wants to use their technology for a purpose the tech companies themselves consider immoral (whether it actually is or not), suddenly their technology isn't the paragon of inherent moral virtue they advertised. It has to be controlled by them, or they should be allowed to take back the ball they gave to everyone freely and go home.
The technology is neither moral nor immoral. An application of the technology can be moral or immoral. Anthropic doesn't want to be seen developing technology used for certain purposes. That's its right. The gov't can develop its own or buy something else.
The issue here is that Anthropic was granted a monopolistic contract, and is trying to dictate usage to the customer after the fact.
Pretty sure Anthropic is not able to enforce a monopoly on the gov't. If the military doesn't want to sign any other deals, that's on it.
Anthropic has always had terms limiting use of its models. I read the complete terms a year or so ago and found them logical but limiting. But that's its prerogative.
I am sure that other AI vendors will be happy to sell to the Trump Administration. They won't do any better at making the AI more reliable, though. Some crackpot will be able to trick the AI software into thinking (correctly) that the biggest threat to the US lives in Mar-A-Lago, and the AI-guided munitions will destroy it. Hegseth is in way over his head here.
Can we all acknowledge that having AIs able to kill people without human oversight is a bad idea? Autonomous killer drones with no human failsafes on using lethal force is the definition of a bad idea.
As for mass surveillance, I fear that ship sailed years ago.
Killbots give Hegseth a boner his fevered brain can barely process. His continual "little man" syndrome is overshadowed only by Trump's ego.
According to Hegseth, Anthropic's AI is simultaneously critical for national security (threatening to use the Defense Production Act) and a national security risk (threatening to label it a supply chain risk). No contradiction there, just a hint of "nice product you have there…".
Trying to make sense of this.
The DOD thinks it needs a product like Anthropic's, but with the guardrails, if any, dictated by the DOD, not Anthropic.
If Anthropic stands firm on having discretion to decide the guardrails contrary to DOD preferences, then use of it by other DOD vendors is risky because it may 'refuse' to facilitate those vendors fulfilling DOD requirements. It puts Anthropic in a position to frustrate the DOD's aims with regard to any vendor that uses it.
Anthropic has TOS and usage rules here.
https://www.anthropic.com/legal/commercial-terms
https://www.anthropic.com/legal/aup
This is what companies sign. If they abide by these rules, they're fine. If they break these rules, they can be banned. What exactly is the risk?
The Defense Production Act of 1950: History, Authorities, and Considerations for Congress
Perhaps when you're an AI company selling to the Defense Department, it might behoove your legal team to peruse the Jones Act AND the Defense Production Act of 1950 before you start whinging to the New York Times about unprecedented demands from the very government to which you're willingly selling your totally boss, whiz-bang, it's-gonna-make-everything-better large-language machine-learning model.
I can't even get large language models to write simple, error-free computer code to analyze simple data. And Hegseth thinks they are ready for military use???
I'm no expert but this looks like an ordinary contract dispute. If Anthropic doesn't like the customer's contract demands they are free to sell their products elsewhere.
Didn't Anthropic's AI just help a hacker conduct a massive hack of Mexico's government, something this clown said would be impossible?
https://cybernews.com/security/claude-ai-mexico-government-hack/
Not sure I take him seriously.
Someone got a model to do something it wasn't intended to do. That makes Anthropic about as guilty as Toyota when you use a 4Runner to do a drive-by. And obviously it was against Anthropic's TOS. Not sure what you expect of them.
Just once I'd like to see Reason in their next article celebrating this new totally-liberatory bullshit-hassle-bummer-head-trip international-borders-smashing Silicon Valley Next Big Thing spend at least one sentence chin-scratching about how the next GOP administration might abuse it... that way you'll save Zen Master Rick James 30 seconds of typing "We'll see" in the comment section.
Good on Anthropic.