
The Federal Government's Crusade Against Anthropic Raises First Amendment Concerns

Trump administration officials openly seek to punish the AI company for its corporate philosophy.

J.D. Tuccille | 3.11.2026 7:00 AM

(Illustration: Midjourney/Ivan Cholakov/Dreamstime)

Vendors have the right to do business with whoever they please, and to put conditions on the purchase of their goods and services. Buyers have a matching right to choose among vendors, and to enter only deals that serve their purposes. But when government officials go further and use their power to punish private businesses that won't sell them what they want, they may run afoul of constitutionally protected rights. That's the case in the battle between AI firm Anthropic and the Trump administration.

You are reading The Rattler from J.D. Tuccille and Reason. Get more of J.D.'s commentary on government overreach and threats to everyday liberty.


A Company With Ethical Boundaries

Anthropic is a leading tech company whose AI model, Claude, reportedly played a role in the capture of former Venezuelan dictator Nicolás Maduro. But, like many companies, Anthropic limits the use of its technology for reasons of internal beliefs, public relations, or both. The company's philosophy is based on the idea that AI is potentially dangerous and should be built around "good personal values, being honest, and avoiding actions that are inappropriately dangerous or harmful."

In a February 26 press release, Anthropic CEO Dario Amodei pointed out that his company "chose to forgo several hundred million dollars in revenue to cut off the use of Claude by firms linked to the Chinese Communist Party" because it didn't want to enable an authoritarian regime. Likewise, "in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values" when used by any government, including authorities in the United States.

"Some uses are also simply outside the bounds of what today's technology can safely and reliably do," Amodei added. "Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now: Mass domestic surveillance" and "fully autonomous weapons."

As limitations go, refusing to participate in the creation of a totalitarian police state or the production of killer robots seems a reasonable line to draw. But that doesn't matter, because Anthropic has the right to draw whatever lines it wishes. The U.S. government can then respect those limits or take its shopping needs elsewhere.

But that's not what the federal government did. Instead, the president and his allies threw public temper tantrums over Anthropic telling them "no."

Feds Target a Corporate Philosophy for Punishment

"THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS!" President Donald Trump huffed on Truth Social. "Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology."

"@AnthropicAI and its CEO @DarioAmodei, have chosen duplicity. Cloaked in the sanctimonious rhetoric of 'effective altruism,' they have attempted to strong-arm the United States military into submission – a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives," sniffed Secretary of War Pete Hegseth. "In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic."

In their comments, administration officials made it clear they were punishing Anthropic for its corporate beliefs, not because the company poses an actual danger to the country.

"Designating Anthropic as a supply chain risk would be an unprecedented action—one historically reserved for US adversaries, never before publicly applied to an American company," Anthropic responded prior to filing a lawsuit against the federal government. "We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government."

Legally unsound and dangerous are good descriptions for a government policy that is openly intended as retaliation against a company for its corporate philosophy.

The First Amendment Is at Stake When Government Punishes Private Ethics

"The claim implicit in the Pentagon's reported demand is…that when national security is invoked, the state's judgment supersedes the moral constraints of the supplier. The company may sell — but only on terms that dissolve its own ethical boundaries," notes Walter Donway for the American Institute for Economic Research's The Daily Economy.

Importantly, the Anthropic–Pentagon dispute raises the issue of whether the government can punish a company for disagreeing with government officials as to what constitutes ethical behavior.

"Though the media is busy framing this as a national security showdown, it actually poses a constitutional concern," warns John Coleman for The Foundation for Individual Rights and Expression (FIRE). "It is a test of whether the federal government can weaponize its contracting power to force a private company to bend the knee."

That doesn't mean the federal government must do business with Anthropic. But it can't forbid federal contractors from using the company's products as a punishment.

"The government's actions, which are designed to harm Anthropic's business, raise serious constitutional concerns, including threats of compelled speech and retaliation against a company for taking positions disfavored by government officials," Coleman added.

FIRE filed an amicus brief with the U.S. District Court for the Northern District of California supporting Anthropic's First Amendment case against the federal government.

It should be noted that Anthropic is not the first company to put conditions on the sale of its products to governments. For years, Barrett Firearms has refused to sell its products to agencies in jurisdictions that don't allow civilians to own large-bore guns. Home Depot had a longtime policy against doing business with the federal government because it didn't want the bureaucratic headaches that came with being classified as a federal contractor; it reversed that policy amid public blowback during the Iraq War.

Nobody, the Defense Department included, should be forced to do business with Anthropic. But neither should the government be allowed to penalize private parties that refuse to do things they consider wrong.



J.D. Tuccille is a contributing editor at Reason.

Topics: First Amendment, Artificial Intelligence, Defense, Law & Government, Federal Government, Government, Business and Industry, Foundation for Individual Rights and Expression

Comments (46)


  1. Mickey Rat   2 months ago

    "It should be noted that Anthropic is not the first company to put conditions on the sale of its products to governments. For years, Barrett Firearms has refused to sell its products to agencies in jurisdictions that don't allow civilians to own large-bore guns. Home Depot had a longtime policy against doing business with the federal government because it didn't want the bureaucratic headaches that came with being classified as a federal contractor; it reversed that policy amid public blowback during the Iraq War."

    Apples to oranges. Those two companies simply refused to be federal contractors. Anthropic wants to be a federal contractor with an exclusive contract for services and dictate policy to its customer. However, a company cannot bully the federal government the way it can an individual person with terms of service.

    "Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic."

    So the federal government put a TOS on other federal contractors not to do business with Anthropic. It is a bit of tit for tat here.

    1. mad.casual   2 months ago

      It is a bit of tit for tat here.

And that's the favorable view that... especially in Anthropic's case... elections, lives, markets, etc., etc. don't matter. Otherwise, Donald Trump was elected, Secretary Pete served and was appointed while Anthropic and Claude, apparently, can't be trusted to handle a loaded weapon without killing all humans or something (and I say this as someone who has repeatedly pointed out that the No. 1 threat to mankind's survival for the last ~75 yrs. has been other humans).

I trust that I can be alone in a room with Donald Trump or Pete Hegseth or both, even with a loaded gun, and we all walk out alive. If Anthropic wants to tell me that in a room alone with Pete Hegseth, Donald Trump, their robot, and a loaded gun my odds of survival go down, labeling them a security risk sounds entirely appropriate. Analogously, if some company manufacturing Pete Hegseth clones made a similar claim, all Pete Hegseths and their manufacturer would be security risks. The issue/question of "supply chain" only matters inasmuch as they provide materially different products elsewhere, which isn't really clear but is somewhat immaterial as long as the distrusting party terminates contracts within scope.

      1. Neutral not Neutered   2 months ago

I tried to find the exact request made and could not. All I found were statements giving the Anthropic CEO's version and their lawyers'.

        But OpenAI just signed an agreement with the Department of War and the Mass Domestic Surveillance part was firmly put down. See my recent post.

        I would like to see the actual Department of War request for Autonomous weapon development with Anthropic AI.

      2. Squirrelloid   2 months ago

        "I trust that I can be alone in a room with Donald Trump or Pete Hegseth or both, even with a loaded gun, and we all walk out alive. If Anthropic wants to tell me that in a room alone with Pete Hegseth, Donald Trump, their robot, and a loaded gun my odds of survival go down, labeling them a security risk sounds entirely appropriate. "

        This analogy is inaccurate.

        Currently, you can walk into a room with Pete Hegseth, DT, Anthropic's robot, and a loaded gun, and your odds of survival are better than the room without the robot (because the robot cannot currently make autonomous decisions to kill, so if it has the weapon, it's not shooting it).

        The *Department of Defense* is demanding that Anthropic change their robot so that your odds of survival would be lower.

        1. Rick James   2 months ago

          Unfortunately, everyone is going down a fanciful sci-fi Terminator-like rabbit hole of analogies, and that's not required here.

While I haven't read all (much) of the legalese here, or exactly what Anthropic was demanding and what the DOD was demanding, I can assure you the language is probably sufficiently complex. And I can also assure you that Reason's depiction of it is going to be akin to a CNN report of two teenagers, minding their own business, SUDDENLY ARRESTED!

          Even if Reason is correct(tm) on their overall point, I guarantee their depiction of the details will be so sorely lacking as to be laughable.

          What we need to think about is what AI (as we call it) currently does, how it works, and all the ways it could be deployed by the DOD. AI basically does stuff like summarize documents, can translate text into natural language, recognize images (facial recognition/object recognition etc) and translate/transcribe voice communications.

          Again, I don't know what exactly the legal language was (and we'll not get it here) but what this all sounds like is Anthropic stood up and said, "No using voice/image/object/transcription/facial recognition/summarization technology in service of killing people."

          Unfortunately, that's what the DoW does. So you could probably make a case that anything could be, in some vague way, in service of killing people. If Anthropic does down-stream business with other contractors of DOD, can Anthropic constantly go into a stance where they stop work on one or another project, creating ripple effects across all the DoD suppliers? Very likely.

          1. mad.casual   2 months ago

            Me: On a scale of Moderna COVID vaccine safety to Sig P320 safety how safe would a Claude-powered targeting computer be?
            Claude: This is a fun scale to reason about! Let me break it down:
            The scale endpoints:

            Moderna COVID vaccine: Extremely well-tested, peer-reviewed, with robust safety monitoring. Minor known side effects, very rare serious ones. Broadly considered safe for mass deployment.
            Sig P320: Infamous for its drop-fire issues (firing without trigger pull) — a safety failure in a system specifically designed around controlled, intentional activation.

            Where a Claude-powered targeting computer would land:
            Honestly, it would be off the scale entirely — and not in a good way — but for reasons worth unpacking:
            A weapon targeting computer requires near-zero failure tolerance, deterministic outputs, and no hallucinations. Claude (and LLMs generally) have characteristics that are fundamentally mismatched with that:

            Probabilistic outputs — I generate statistically likely responses, not guaranteed-correct ones. A misidentified target isn't a typo.
            No verifiable reasoning chain — I can't fully audit my own outputs, and neither can anyone else in real time.
            Prompt/context sensitivity — edge cases and unusual inputs can produce unexpected behavior.
            No formal verification — unlike safety-critical software (avionics, medical devices), LLMs can't be mathematically proven to behave within bounds.

            The P320 at least usually worked as intended. An LLM targeting system could be confidently wrong in novel ways.
            The honest answer: Somewhere between "Sig P320 with a hair trigger" and "a category that didn't exist before." The risks aren't just mechanical — they're epistemic.
            This is also a core reason Anthropic supports keeping humans meaningfully in the loop for any high-stakes decisions, especially irreversible ones.
            --
            Again, Claude is out. Per its own "No verifiable reasoning chain", there's a solid case that Anthropic is out even in non-military or combat applications. For instance, their technology can't be relied upon to perform forensic analysis and the liability will always have to fall back or be shifted on to a human.

            Full Disclosure: Weapon selection for the DOD/W has been a clusterfuck of committee groupthink for a long time. Claude *should* fit right in.

            1. Nelson   2 months ago

              That was, surprisingly, a very detailed and well-reasoned post. You make excellent points. Thank you for this post.

    2. ricbee   2 months ago

      You don't mess with DJT

    3. jimwpdx   2 months ago

      The restriction on Anthropic's policy is because of its unwanted effect on the performance of AI. No constitutional conflict.

  2. mad.casual   2 months ago

    Libertarians for obligatory MIC.

  3. charliehall   2 months ago

    When did for profit corporations get First Amendment rights?

    Oh, right. Citizens United. Overturn it.

    1. Sevo, 5-30-24, embarrassment   2 months ago

      When did charliehall find a second brain cell? Oh, right. Never.
      Fuck off and die, asswipe.

      1. Pilate   2 months ago

        Go to your room child. Take it out on your pillow.

        1. Sevo, 5-30-24, embarrassment   2 months ago

          Fuck off and die, TDS-addled asswipe. Or you could (but won't) grow up.

    2. TrickyVic (old school)   2 months ago

      First amendment applies to the government.
      Congress shall make no law.

      As if big money wasn't in politics before that.

    3. Rev Arthur L kuckland (5-30-24 banana republic day)   2 months ago

Can you sum up in 2 sentences what the Citizens United case was about?

  4. Longtobefree   2 months ago

    "Vendors have the right to do business with whoever they please, and to put conditions on the purchase of their goods and services. Buyers have a matching right to choose among vendors, and to enter only deals that serve their purposes."

    And

    "The U.S. government can then respect those limits or take its shopping needs elsewhere."

    Then

    "Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology."

    So what's the problem?

    Oh, yeah; Trump.

    1. mad.casual   2 months ago

      Oh, yeah; Trump.

      Generously, 50% TRUMP!, 50% KOMPUTERZ IZ MAJICK!

      It's like the scene from Shang Chi where Ben Kingsley's character is enthralled by The Planet of The Apes except instead of The Planet of The Apes, JD watched the video of the chimpanzee with an AK-47.

      You can practically hear Mama Tuccille in Ben Kingsley's voice saying, "It's not a real monkey, it's computer generated." and JD replying, "A computer generated a monkey *and* taught it how to shoot? Amazing!"

    2. Sir Chips Alot   2 months ago

      exactly....there is no point to this blog post rant, other than TDS

    3. ricbee   2 months ago

      unReasonable has TDS

  5. Archibald Tuttle   2 months ago

    What about the support of terrorism laws? What about Nazi sympathizers during WWII?

    Hasn't this form of viewpoint discrimination always existed in every country?

  6. mad.casual   2 months ago

    "Though the media is busy framing this as a national security showdown, it actually poses a constitutional concern," warns John Coleman for The Foundation WHO SHALL NOT BE NAMED!. "It is a test of whether the federal government can weaponize its contracting power to force a private company to bend the knee."

    Brilliant! Just awesome. FIRE being on Anthropic's side further convinces me that the administration is in the right here.

Reiterating what I was saying above; I'm pretty sure I can walk into a room with Donald Trump, Pete Hegseth, and a loaded gun and we all walk out alive. I'm less certain that if I walked into a room with Donald Trump, Pete Hegseth, any given FIRE employee/lawyer, and a loaded gun, one or all of us walk out alive. Where else they work and provide service without access to firearms or political leaders, I can't say, but in this application, I don't trust them.

    To wit, if I post the quote above unedited, even though I'm quoting someone else, I get an "A potentially unsafe operation has been detected in your request to this site
    Your access to this service has been limited. (HTTP response code 403)" because, as I've pointed out previously, I'm naming an organization who shall not be named, even in quote.

    FIRE is not on your side, nor the side of Free Expression. They are a security risk, whether they constitute a supply chain risk is up to the individual user. I wouldn't forbid that conclusion from anyone, private, public, civilian, or government. The Ayatollah roasting in Hell is free to rightly regard FIRE as an enemy of free expression.

    1. BigT   2 months ago

      FIRE is what you get when you have an intolerant ideologically driven organization. They see things as black or white and do not understand that there are shades of grey where the rights of different parties clash and compromise is needed.

      "Vendors have the right to do business with whoever they please, and to put conditions on the purchase of their goods and services."

      But I wanted to use those shoes in a modern art project. Do I need permission?

      It seems to me that if you are using the product or service for your own private purposes the vendor should STFU. If, however, my use of the product can damage the vendor's reputation and the vendor stands to be legally responsible, or it competes with the vendor's business, then they can attach conditions.

      1. Neutral not Neutered   2 months ago

        You missed the point. The request was to develop new technology. Not sell existing like your shoes...

      2. Squirrelloid   2 months ago

        Government has no rights, so what clash of rights are you imagining here?

        1. mad.casual   2 months ago

          Government has no rights, so what clash of rights are you imagining here?

          The clash of one person's religious or free association rights with another's speech or self-defense rights, dumb dumb.

          That's the whole point, FIRE has repeatedly demonstrated that they will void contract rights in order to defend "labor rights" of people consuming tax dollars. Their advocacy posing as defense isn't imaginary. It's their MO.

          Further, they're rather blatant about inserting themselves into cases or distorting the narrative about cases in order to aggrandize themselves or their practice, both escalating trivial cases into Constitutional crises and reframing deliberate criminal acts as merely expressions.

          Even by your (and Anthropic's) own precepts, Claude shouldn't be telling military personnel what to do. Anthropic's side is that they should still get to tell them what to do otherwise. At least some of the people who elected DT/Sec. Pete disagree. FIRE, despite being an uninvolved and neutral party filed an amicus brief on only one side.

          SSDD - FIRE is not neutral, libertarian, or on your side.

    2. MollyGodiva   2 months ago

FIRE has been a steadfast non-partisan defender of the 1A for three decades. They have pretty much single-handedly kept colleges from trampling on 1A rights. No other organization is like them.

  7. Thoritsu   2 months ago

    Reason lost all credibility arguing First Amendment violations when they failed to raise hell on COVID, Hunter Laptop Russiagate censorship. DONE

    1. ricbee   2 months ago

      That's when it lost me

  8. Brett Bellmore   2 months ago

    Wow, pretty enthusiastic about tech companies being control freaks, aren't we?

    Personally, I'm sick up to here with companies selling me stuff, and then trying to dictate its use even after I own it. The difference is that I'm relatively powerless to fight that sort of thing, while picking that fight with the federal government is a bit more of a problem for a company.

  9. Sevo, 5-30-24, embarrassment   2 months ago

    "Legally unsound and dangerous are good descriptions for a government policy that is openly intended as retaliation against a company for its corporate philosophy."
    Tuccille, as a steaming pile of TDS-addled lying shit, assumes his guess as to the reasoning has some value.
    No, fuckwad, it doesn't matter in the least. Fuck off and die.

    1. Pilate   2 months ago

      You again! You must need a diaper change.

      1. Sevo, 5-30-24, embarrassment   2 months ago

        You again? You must need a reaming with a barb-wire-wrapped baseball bat.
        Fuck off and die, shitstain.

  10. Social Justice is neither   2 months ago

So the company set its terms, the government rejected them, and since it's a single source contract they cannot commit to a rejected vendor in other areas, and there might be a conflict with the contract under discussion. What's the problem? Is the government not allowed to choose vendors or set the terms of its contract?

    1. Sir Chips Alot   2 months ago

      you are trying to bring logic to Reason's TDS

  11. Pilate   2 months ago

I'm inclined to generally agree with the author. I'd note, however, the assertion - "Vendors have the right to do business with whoever they please, and to put conditions on the purchase of their goods and services." Apparently not if the vendor is a local bakery and the customer wants a big cock cake for a gay marriage.

There's also a pettiness of style I find amateurish and silly; using adjectives to characterize a quote as in "Donald Trump huffed on Truth Social" and "sniffed Secretary of War Pete Hegseth."

There's a mean-girl quality to this sort of reporting.

    1. Mickey Rat   2 months ago

The difference is the baker was refusing a contract to make a cake to be used for a gay wedding. What Anthropic is doing is more like accepting the contract to bake a cake, then telling the customer that it cannot be used for a gay wedding.

      1. Squirrelloid   2 months ago

        No. It would be like Anthropic is willing to bake a standard cake, but the government is demanding something on top of the cake that Anthropic is unwilling to make.

        If it was as simple as Anthropic being able to do it already, the government wouldn't be demanding Anthropic do it, they'd just use it how they saw fit. It *cannot* currently be used in the way the government wants, so they're demanding changes.

      2. Neutral not Neutered   2 months ago

        This is the problem. We do not have the actual request in front of us.

        Did the Department of War request advanced products to be developed which Anthropic refused or did the Department of War ask for the existing AI to do something it is not capable of?

        And since OpenAI signed a contract which I assume was what was asked of Anthropic, the language removes any potential for Mass domestic surveillance usage which contradicts the Anthropic CEO's statement.

        1. Squirrelloid   2 months ago

          There's nothing stopping the government from going after Anthropic for something, and then turning around and signing a deal Anthropic would be willing to sign with their competitor. So that doesn't contradict what Anthropic has said at all. It's inconsistent on DoD's part, but nothing stops the government from being inconsistent.

          We're going to have to wait for the lawsuit to see what the government was actually telling Anthropic.

    2. Neutral not Neutered   2 months ago

      You have given it too much credit, calling it reporting.

      No citation of the actual request by the Department of War was provided.

      Again reason publishes one side of the story written in a biased style.

    3. Sevo, 5-30-24, embarrassment   2 months ago

      "I'm inclined to generally agree with the author..."

      That's because you're a steaming pile of lying TDS-addled shit.

  12. Neutral not Neutered   2 months ago

All I could find was Anthropic's response and statements regarding what was asked of them. No actual request, as worded by the Department of War, has been cited in the media.

    I dug a bit and found that OpenAI signed an agreement for deploying advanced AI systems in classified environments.

    In my opinion this is what was asked of Anthropic.

    And in the agreement with OpenAI:

    https://openai.com/index/our-agreement-with-the-department-of-war/

    "Throughout our discussions, the Department made clear it shares our commitment to ensuring our tools will not be used for domestic surveillance. To make our principles as clear as possible, we worked together to add additional language to our agreement.

    This language makes explicit that our tools will not be used to conduct domestic surveillance of U.S. persons, including through the procurement or use of commercially acquired personal or identifiable information. The Department also affirmed that our services will not be used by Department of War intelligence agencies like the NSA. Any services to those agencies would require a new agreement."

So this contradicts the comments by Anthropic AI's CEO suggesting the Department of War was expecting to use it for "Mass domestic surveillance."

    1. Social Justice is neither   2 months ago

      I'm sure these are the same assurances provided by the NSA and Snowden showed us exactly what that assurance was worth.

    2. Squirrelloid   2 months ago

      https://www.politico.com/news/2026/02/26/incoherent-hegseths-anthropic-ultimatum-confounds-ai-policymakers-00800135

      It's unclear exactly which ethical concerns Hegseth was demanding Anthropic abandon. The two (mass surveillance, autonomous weapons) are just given as examples of those ethical red lines.

      From the article:
      "Hegseth met with Anthropic CEO Dario Amodei on Tuesday to deliver a warning — give the military unfettered access to its Claude AI model by Friday evening or else have the government label it a “risk” to the supply chain. The designation, typically reserved for foreign firms with ties to U.S. adversaries, could ban companies that work with the government from partnering with Anthropic.

      But Hegseth simultaneously threatened to invoke the Cold War-era Defense Production Act to compel Anthropic to work with the Defense Department and nix the company’s ethical red lines, which include restrictions on using Claude to surveil U.S. citizens or empower autonomous weapons. The government used the law during the Covid-19 pandemic to accelerate production of medical supplies and vaccines."

      But it shouldn't matter which red lines they specifically wanted to cross. They demanded unfettered access (which would require removing prohibitions on those things, too). And any ethical red lines Anthropic has set should be treated with the same seriousness.

    3. Squirrelloid   2 months ago

      Ah, it is specifically those two red lines at issue. See here: https://www.bbc.com/news/articles/cvg3vlzzkqeo

      "An Anthropic spokeswoman added on Thursday that while the company received updated wording from the DoD for its contract on Wednesday night, it represented "virtually no progress on preventing Claude's use for mass surveillance of Americans or in fully autonomous weapons."

      "New language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will," she said. "Despite [the Department of War's] recent public statements, these narrow safeguards have been the crux of our negotiations for months." "

      Why they turned around and signed a contract with OpenAI that presumably respects those same ethical considerations is something only Hegseth or his DoD underlings can answer.


