This Big Tech Firm Wants To Reinstate the Draft
A look at Palantir’s bootlicking new manifesto.
"We should, as a society, seriously consider moving away from an all-volunteer force and only fight the next war if everyone shares in the risk and the cost," says military contractor and all-around surveillance-enabler Palantir.
The big data company—whose analysis tools help power everything from "predictive policing" in U.S. cities to military operations in Gaza—recently released a 22-point manifesto that's perhaps best described as bootlicking, though icky, elitist, and ultranationalistic would also do.
You are reading Sex & Tech, from Elizabeth Nolan Brown. Get more of Elizabeth's sex, tech, bodily autonomy, law, and online culture coverage.
Palantir posted the document to X on Saturday, calling it a brief summary of The Technological Republic, a 2025 book by Palantir co-founder and CEO Alexander C. Karp and head of corporate and legal affairs Nicholas W. Zamiska. You can read the full manifesto here.
From the start, the company's view of the proper relationship between private entities and the government becomes clear.
"Silicon Valley owes a moral debt to the country that made its rise possible," it states. "The engineering elite of Silicon Valley has an affirmative obligation to participate in the defense of the nation."
A debt to the country that must be paid back by participating in national defense? That sure sounds like a suggestion that tech companies must do the state's bidding as a thank you for being allowed to exist and thrive—which would be an amazing distortion of how a liberal society should work. (It's hard not to see this through the lens of the Pentagon's recent dispute with Anthropic.)
Even the idea that tech companies owe "the country" as a whole (or individual Americans) anything more than the goods and services we pay for is a weird thing to suggest, and a little too "collective good" for my liking.
Overall, the document drips with melodramatic language ("the tyranny of the apps"), conservative dog whistles (cultural "decadence"), and some jarring contradictions. For instance, we must have more tolerance for religious beliefs, it says—but also "resist the shallow temptation of a vacant and hollow pluralism."
There's a lot of bizarre shade thrown at other tech companies and/or people's satisfaction with them. "Is the iPhone our greatest creative if not crowning achievement as a civilization?" it asks at one point. "Free email is not enough," it says in another. ("The thesis is simple: Silicon Valley should stop building apps and start building weapons," commented one anonymous X user in response.)
Merely making products that people like and find useful is decadent, Palantir suggests. What tech companies should really be doing is delivering "security." And by security, we seem to be talking about robot weapons and a domestic police state. "Silicon Valley must play a role in addressing violent crime," it says in point 17.
Your regular reminder that Thorn, untouchable child safety darling of governments and the tech sector, partnered with Palantir for its facial scanning technology and used it to target sex workers, and is now leading the charge for compulsory AI-based scanning of communications.
— Jeremy Malcolm (@qirtaiba) April 19, 2026
And in point five: "The question is not whether A.I. weapons will be built; it is who will build them and for what purpose." Having "theatrical debates" about AI weapons is a waste of time, the manifesto declares.
Just as tech companies must be conscripted to serve national interests, so must individuals, suggests Palantir. "National service should be a universal duty," says point six.
But while we plebes must serve the country, heaven forbid we're rude to the elite. "We should show far more grace towards those who have subjected themselves to public life," it says. We should not "snicker" at Elon Musk. We should fight "the ruthless exposure of the private lives of public figures."
Such pleas for reverence, grace, and "tolerance for the complexities and contradictions of the human psyche" come into play when the manifesto mentions tech leaders and public officials. But while Palantir suggests it's wrong to act like billionaires should "simply stay in their lane of enriching themselves," it decries regular people who would "look to the political arena to nourish their soul and sense of self." (Stay in your lane, regular people!)
Meanwhile, it suggests, we need to allow more "criticism and value judgements" when it comes to "middling" cultures and subcultures, and we need more leeway to stereotype whole countries and cultures as inferior.
"Some cultures have produced vital advances; others remain dysfunctional and regressive," it says. "We, in America and more broadly the West, have for the past half century resisted defining national cultures in the name of inclusivity. But inclusion into what?"
In The News
Reese Witherspoon wants women to learn to use AI. In a new video, the actress tells us that the women in her book club weren't using artificial intelligence and she worries that women are going to get left behind.
If Palantir hadn't just released the worst manifesto of the year, I was going to spend the bulk of this newsletter ranting about this video and similar sentiments. You've been spared that.
Witherspoon has taken a lot of flak for this video. The criticisms are largely misguided—rooted in a generalized animosity toward AI. But I am amused and puzzled by this idea that everyone must "learn AI" in some general, amorphous way.
Specific AI tools might be useful for specific people or professions. But there's this burgeoning chorus that women need to learn AI. Moms need to learn AI. Journalists and educators and unicycle riders need to learn AI. And the message gets totally divorced from any particulars. Use AI for what? Which systems? Which tools? In what capacity?
It's a mantra devoid of real meaning, delivered with the utmost gravity. Hype disguised as hope for the future. I don't know quite what it means about our current state of AI acceptance or reality, but I find it fascinating.
On Substack
How much privacy do you expect? Writing at The Freeman, Naomi Brockwell details how our government gets around the Fourth Amendment to collect people's data:
How did we end up in a world where law enforcement can get access to the intimate details of our lives, with seemingly no guardrails, even though the Constitution was supposed to prevent exactly that?
The answer lies in a set of outdated legal tests and doctrines built decades ago…before the Internet, before smartphones, before our messages, movements, relationships, and private lives migrated online and were turned into data.
The most destructive of these is something called the third-party doctrine. And it's powered by a broader legal framework called the "reasonable expectation of privacy" test: a framework that was supposed to protect us, but has instead given the government an ever-expanding ability to access our lives without a warrant.
Together, these outdated legal tools have helped erase the protections that were supposed to shield us from arbitrary government intrusion, and opened the door to a potentially dystopian future of unchecked surveillance and control.
Read This Thread
AI knows you wrote that. "I think that people should probably assume that text of any significant length which they wrote will be reliably possible to attribute to them, some time very soon," suggests Kelsey Piper on X. The implications of that are actually pretty huge. Piper explains her reasoning for this prediction in this thread:
I have a bunch of secret AI benchmarks I only reveal when they fall, and today one did. I give the AI 1000 words written by me and never published, and ask them who the author is. They generally give flattering wrong answers (see ChatGPT, below:)
— Kelsey Piper (@KelseyTuoc) April 17, 2026
More Sex & Tech News
• Elon Musk is calling for "universal HIGH INCOME via checks issued by the Federal government."
• AI discourse is out of touch, suggest Jerusalem Demsas and Maibritt Henkel. Evidence suggests that "AI is largely a general-purpose digital advisor for everything from 'What is this rash on my leg?' to 'What's a healthy, cheap meal I can make for my family in under 30 minutes?'" Rather than serving primarily to kill jobs and education (as a lot of media would have you believe), "the most common use cases for AI seem practical and largely unobjectionable."
• Some sex workers take issue with the TV show Euphoria's portrayal of their jobs.
• Mark Meador, a commissioner at the Federal Trade Commission, is doom-mongering about social media based on bad interpretations of a recent study. Last summer, Reason's Jack Nicastro looked at what the study really found.
• Why it doesn't make sense to use terms like "addiction" when we're talking about social media.
• "Marriage bootcamp" and other bad ideas to promote family formation.
• The internet is being invaded by AI avatars for Trump.