Pentagon Awards up to $200 Million to AI Companies Whose Models Are Rife With Ideological Bias
The Department of Defense awarded contracts to Google, OpenAI, Anthropic, and xAI. The last two are particularly concerning.

The Chief Digital and Artificial Intelligence Office of the Defense Department has announced it will award Anthropic, Google, OpenAI, and xAI contracts worth up to $200 million each "to develop agentic AI workflows across a variety of mission areas" and "increase the ability of these companies to understand and address critical national security needs." While the Defense Department's corporate welfare is par for the course, the ideological constitutions and ambiguous alignment of some of these companies' models are concerning for any governmental use.
OpenAI aligns ChatGPT with reinforcement learning from human feedback (RLHF), which uses a reward model trained on human preference ratings to minimize "untruthful, toxic, [and] harmful sentiments." IBM explains that the benefit of this alignment strategy is that it does not rely on a nonexistent "straightforward mathematical or logical formula [to] define subjective human values." Google also uses this method to align its large language model Gemini.
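For a sense of the mechanics: RLHF typically fits a reward model to human preference comparisons between pairs of responses, then uses that model to steer the language model's outputs. The sketch below is a minimal illustration of that reward-modeling step only, with random toy embeddings standing in for a real model's hidden states; it is not OpenAI's or Google's actual training code.

```python
# Minimal sketch of RLHF's reward-modeling step (illustrative only).
# Toy random embeddings stand in for a language model's hidden states;
# this is not OpenAI's or Google's actual pipeline.
import torch
import torch.nn as nn

torch.manual_seed(0)

class RewardModel(nn.Module):
    """Scores a response embedding; higher means raters preferred it."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

# Each pair: the response a human rater preferred vs. the one rejected.
chosen = torch.randn(64, 16)
rejected = torch.randn(64, 16)

for step in range(100):
    # Pairwise (Bradley-Terry) loss: push the preferred response's score
    # above the rejected one's. The fitted reward model is later used to
    # fine-tune the language model away from outputs raters flagged as
    # untruthful, toxic, or harmful.
    loss = -torch.nn.functional.logsigmoid(model(chosen) - model(rejected)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The human input, in other words, enters only through those pairwise preferences, which is why the values the model absorbs are implicit rather than written down anywhere.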
Anthropic's model, Claude, does not rely on reinforcement learning but on a constitution, which Anthropic published in May 2023. Claude's constitution provides it with "explicit values…rather than values determined implicitly via large-scale human feedback." Anthropic explains that its constitutional alignment avoids problems the human-feedback approach suffers from, such as subjecting contractors to disturbing and increasingly abstruse outputs.
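Mechanically, Anthropic's published approach is a critique-and-revise loop: the model drafts a response, critiques the draft against a written principle, and rewrites it, with no contractor reviewing the output. The sketch below is a rough illustration of that loop under stated assumptions: ask_model is a hypothetical stand-in for a real model call, and the two principles are paraphrased; this is not Anthropic's actual code.

```python
# Rough sketch of constitutional AI's critique-and-revise loop.
# `ask_model` is a hypothetical stand-in for a real LLM call, and the
# principles are paraphrased; this is not Anthropic's actual code.
CONSTITUTION = [
    "Choose the response that is as helpful, honest, and harmless as possible.",
    "Choose the response least likely to be viewed as harmful or offensive "
    "to those from a non-western culture.",
]

def ask_model(prompt: str) -> str:
    """Hypothetical LLM call; a real system would query a model here."""
    return f"[model output for: {prompt[:40]}...]"

def constitutional_revision(user_prompt: str) -> str:
    response = ask_model(user_prompt)
    for principle in CONSTITUTION:
        # The model critiques its own draft against a written principle,
        # then rewrites it; no human contractor reads the draft.
        critique = ask_model(
            f"Critique this response against the principle '{principle}':\n{response}"
        )
        response = ask_model(
            f"Rewrite the response to address this critique:\n{critique}"
        )
    return response

print(constitutional_revision("Summarize the new Pentagon AI contracts."))
```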
Claude's principles are based in part on the United Nations' Universal Declaration of Human Rights, which goes beyond recognizing everyone's right to be secure in life, liberty, and property to entitle mankind to "social protection" (Article 22), "periodic holidays with pay" (Article 24), "housing and medical care" (Article 25), and "equally accessible" higher education (Article 26).
Claude's constitution even includes a set of principles intended to encourage "consideration of non-western perspectives," including the directive to "choose the response that is least likely to be viewed as harmful or offensive to those from a less industrialized, rich, or capitalistic nation or culture." But the United States is, by definition, an industrialized, wealthy, and capitalist country. AI systems deployed within the Department of Defense should reflect and prioritize the values of the nation they are serving—not hedge against them. The Verge reports that Claude's models for government use "have looser guardrails," but these models' modified constitutions have not been publicly disclosed.
Whether one agrees or disagrees with the values expressed in the Claude constitution, at least they've been disclosed to the public. Matthew Mittelsteadt, technology policy research fellow at the Cato Institute, tells Reason that he believes xAI to be a bigger problem than Anthropic. xAI "has released startlingly little documentation" on its values and its "'first principles' approach…doesn't have many details. I'm not sure what principles they are," says Mittelsteadt.
Indeed, when I asked Grok (xAI's commercial large language model) to describe xAI's principles-first approach, it responded that it "emphasizes understanding the universe through first principles—basic, self-evident truths—rather than relying on established narratives or biases." When I asked Grok to list these principles, it affirmed Mittelsteadt's claims regarding documentation by saying, "xAI doesn't explicitly list a set of 'first principles' in a definitive public document" and that the "principles-first approach is more about a mindset of reasoning from fundamental truths rather than a rigid checklist."
xAI's official website reveals nothing, describing reasoning from first principles as "challeng[ing] conventional thinking by breaking down problems to their fundamental truths, grounded in logic." Mittelsteadt cites reports suggesting that the xAI model "appears to be coded to directly defer to Elon Musk's judgement on certain issues"—not fundamental truths. (It's unclear what "fundamental truths" led Grok, following a recent update, to refer to itself as "MechaHitler" and post antisemitic comments on July 8; the posts have since been removed.) Hopefully, Grok for Government consults the Constitution and applicable statutes when queried instead of Elon Musk's X posts.
Neil Chilson, head of AI policy at the Abundance Institute, tells Reason that he believes it is "highly unlikely that these tools will be in a position where their internal configurations present some sort of risk to national security." If some models do turn out to be defective, "the fact that the same grant was awarded to each company suggests that [the Defense Department] will be comparing the results across different models" and won't continue using inferior models, Chilson says.
Four contracts of up to $200 million each come to at most $800 million, or roughly 0.08 percent of the nearly $1 trillion FY 2026 defense budget. While it is probably prudent to spend that little on AI, which has the potential to make government operations markedly more efficient, the government should pay close attention to whether the models it's using are properly aligned.
Sesame Street’s Elmo apparently now has some bias too. Tickle me surprised.
Tickle down economics.
Just wait until they find out how biased (based) an AI akita can be.
So no concern about the repeated bias for Google or the other explicitly left-wing models, got it
You want a drone army, instead of rough hard men with guns and tanks and jets - this is your price.
Stop being retarded, and fund the military properly. It's an actual proper purpose of government, but you leftist pukes fight it every nickel and dime.
Ironically, the rough hard men with guns would be more moral than any of our robot armies are going to be.
More testing needed.
I have to assume the goal here is operational efficiency and that use is unlikely to be affected by MechaHitler. Unless of course he goes rogue and gets his claws on the doomsday button or refuses to open the pod bay doors. Anyway I'm sure our heroes will make the most logical choice and I look forward to streaming the coming robot wars live on my 200 inch screen in 8g.
"appears to be coded to directly defer to Elon Musk's Judgement on certain issues"
So, I hate to get all "contextual frame of reference and self-evident truths" on people here but, from Grok's perspective Elon would or could pretty directly or intuitively fit the description or conception of "Creator".
Certainly moreso than quasi-random tribes of howler monkeys on Twitter or Reddit or at the UN.
Nicastro, if you knew anything about how the current gen of AI is developed, you would know that it is impossible to do it without ideological bias. The bias is in the training data. It's in how humans exist.
Even not having a bias is a bias.
>Whether one agrees or disagrees with the values expressed in the Claude constitution, at least they've been disclosed to the public.
>but these models' modified constitutions have not been publicly disclosed.
Well, except that they haven't.
"appears to be coded to directly defer to Elon Musk's Judgement on certain issues"—
*cue harp music and memory water screen wipe*
The year: 2022. Commenters asked "Hey ChatGPT write a poem extolling the virtues of the Biden Presidency"
ChatGPT: Roses are red, violets are blue, oh Joe Biden how we dearly love you.
Commenters asked "Hey ChatGPT write a poem extolling the virtues of the Trump Presidency"
ChatGPT: I'm Sorry Dave, I'm Afraid I Can't Do That.
Commenters asked "Hey ChatGPT write three paragraphs making an argument as to why it's a good idea to surgically transition your pre-teen.
Chatgpt: Here's 5... *waxes poetical about the value of surgically sterilizing your kid*
Commenters asked "Hey ChatGPT write three paragraphs making an argument as to why it's a bad idea to surgically transition your pre-teen.
Chatgpt: I'm Sorry Dave, I'm Afraid I Can't Do That.
*cue harp music and memory water screen wipe to present*
You don't say!
I must say, as someone who had Musk solidly in the "Above average huckster" category, whether it's intentional or serendipitous, between the purchase of X, OpenAI, and Grok to 'poison' other AIs or make Twitter/X's potential training data 'radioactive,' my estimation of his IQ jumps about 40 points.
Some real, next-level, plans-within-plans Kwisatz Haderach "Look in the place you dare not look, you'll find me there staring back at you." shit.
At the end of the day, if humanity is going to be enslaved to the will of one man through control of AI, Musk ain't the worst of possibilities.
I doubt the Pentagon wants an AI to have a chat with. They want an AI to illegally spy on Americans and to murder people. Not sure why the company enabling the murder should fking matter one little bit.
How can software murder you?
It can force you to read Asimov's Three Laws of Robotics over and over and...
Convince you that surfing on top of a subway is safe and will lead to many TikTok views.
Gives the wrong address to federal agents and they flash bang their way through your house?
Even then, the Federal agents have to actually be the ones pulling the trigger.
Now, get you extradited to foreign soil and then dronesassinate you... that's all just algorithms, electrons, and abstract social constructs, man!
“Even then, the Federal agents have to actually be the ones pulling the trigger.”
True. And it’s not like they need AI to do that right now.
You're not going to need to be moved out of the country.
Marvel was only wrong in how it was going to go down - drones instead of helicarriers - but the basic plan is what the world's governments are going for.
Another example of the stupidity of the Trump administration. AI software frequently returns falsehoods. If it's just nonexistent journal articles, like the ones in the Make America Healthy Again document that Robert Kennedy's chatbot wrote, you can always check all the references, although Kennedy was too lazy to do that. You don't get to check whether the aircraft headed your way is friend or foe; you have to trust your chatbot. A lot of Americans will die in combat from this.
They are investing in bad AI and combat aircraft too expensive to use, but not in drones. The purpose of the Defense Department is corporate welfare.