Will AI Kill Our Freedom To Think?
Algorithmic systems increasingly shape what we know, see, and question. To preserve free inquiry, we need transparency, competition, and a commitment to timeless principles of open debate.

The current iteration of AI already edits our emails, sorts our inboxes, and picks the next song we listen to. But convenience is just the start. Soon, the same technology could determine which ideas ever reach your mind—or form within it.
Two possible futures lie ahead. In one, artificial intelligence becomes a shadow censor: Hidden ranking rules will throttle dissent, liability fears will chill speech, default recommendations and flattering prompts will dull our judgment, and people will stop questioning the information they're given. This is algorithmic tyranny.
In the other, AI becomes a partner in truth seeking. It will surface counterarguments, flag open questions, draw on insight far beyond any single mind, and prompt us to check the evidence and sources. Errors will be chipped away, and knowledge will grow. Our freedom to question everything will stay intact and even thrive.
The stakes couldn't be higher. AI currently guides about one-fifth of our waking hours, according to our 2024 time-use analysis. It drafts our contracts, diagnoses our diseases, and even ghostwrites our laws. The principles coded into these systems are becoming the hidden structure that shapes human thought.
Throughout history, governments have banned books, closed newspapers, and silenced critics. As Socrates discovered when sentenced to death for "corrupting the youth," questioning authority has always carried risks. AI's power to shape thought risks continuing one of humanity's oldest patterns of control.
The goal hasn't changed; the method has.
Today, the spectrum of censorship runs from obvious to subtle: China's Great Firewall directly blocks content to maintain party control; "fact-checking" systems apply labels with the goal of reducing misinformation; organizations make small, "safety-minded" decisions that gradually shrink what we can see; and platforms overmoderate in hopes of appearing responsible. Controversial ideas don't have to be banned outright; they simply vanish when algorithms, trained to "err on the side of removal," mute anything that looks risky.
The cost of idea suppression is personal. Consider a child whose asthma could improve with an off-label treatment. Even if that medication has been used successfully by thousands of people, an AI search may show only "approved" protocols, burying the lifesaving option. Once a few central systems become our standard for truth, people might believe no alternative is worth investigating.
From medicine to finance to politics, invisible boundaries now have the power to shape what we can know and consider. Against these evolving threats stand timeless principles we have to protect and promote.
These include three foundational ideas, articulated by the philosopher John Stuart Mill, for protecting free thought: First, admit humans make mistakes. History's abandoned "truths"—from Earth-centered astronomy to debunked racial hierarchies—prove that no authority escapes error. Second, welcome opposing views. Ideas improve only when challenged by strong counterarguments, and complex issues rarely fit a single perspective. Third, regularly question even accepted truths. Even correct beliefs lose their force unless frequently reexamined.
These three principles—what we call "Mill's Trident"—create a foundation where truth emerges through competition and testing. But this exchange needs active participants, not passive consumers. Studies show we learn better when we ask our own questions rather than just accepting answers. As Socrates taught, wisdom begins with questions that reveal what we don't know. In this exchange of ideas, the people who question most gain the deepest knowledge.
To keep the free development of thought alive in the AI age, we must translate those timeless principles into practical safeguards. Courts have the power to limit government censorship, and constitutional protections in many democracies are necessary bulwarks to defend free expression. But these legal shields were built to check governments, not to oversee private AI systems that filter what information reaches us.
Meta recently shared the weights—the raw numbers that make up the AI model—for Llama 3. This is a welcome move toward transparency, but plenty about Llama 3 remains out of view, including the data and code used to train it. And even if those were public, the eye-watering amount of money spent on computation puts true replication out of reach for almost everyone. Moreover, many other leading AI systems remain entirely closed, their inner workings hidden from outside scrutiny.
Open weights help, but transparency alone won't solve the problem. We also need open competition. Every AI system reflects choices about what data matters and what goals to pursue. If one model dominates, those choices set the limits of debate for everyone. We need the ability to compare models side by side, and users must be free to move their attention—and their data—between systems at will. When AI systems compete openly, we can compare them against each other in real time and more easily spot their mistakes.
To truly protect free inquiry moving forward, the principles we value must be built into the technology itself. For this reason, our organizations—the Cosmos Institute and the Foundation for Individual Rights and Expression (FIRE)—are announcing $1 million in grants toward backing open-source AI projects that widen the marketplace of ideas and ensure the future of AI is free.
Think of an AI challenger that pokes holes in your presuppositions and then coaches you forward; or an arena where open, swappable AI models debate in plain view before a live crowd; or a tamper-proof logbook that stamps every answer an AI model gives onto a public ledger, so nothing can be quietly erased and every change is visible to all. We want AI systems that help people discover, question, and debate more, not ones that encourage them to stop thinking.
For us as individuals, the most important step is the simplest: Keep asking questions. The pull to let AI become an "autocomplete for life" will feel irresistible. It's up to us to push back on systems that won't show their work and to seek out the unexpected, the overlooked, and the contrarian.
A good AI should sharpen your thinking, not replace it. Your curiosity, not any algorithm, remains the most powerful force for truth.
SkyNet has arrived.
The invocation of "The current iteration of AI already edits our emails, sorts our inboxes, and picks the next song we listen to" feels more Borg.
I don't use AI for email, and it seems like if AI is doing most of your email, it's just shy of spam anyway. Sorting my inbox is always a zero-intelligence chronological stack. Arguable that my spam filter is AI, but it's not really sorting, and that, again, assumes that people shitting out spam is automatically "my inbox". Similar with my songs. Play order is mine, maybe an RNG is involved but that's specifically not intelligent, maybe an AI is involved in recommendations but typically the recommender systems are pretty unintelligent upvote/rank systems and not "I can tell from the way you prefer certain Vivaldi songs that you, contrary to conventional wisdom, might regard Salieri as a superior composer to Mozart."
None of that is 'intelligent' in any sense, certainly nothing beyond Expert Systems. You could go to AllMusic and put those groupings in a large if-then-else with the same effect. Same with the categories:
AllMusic Review
User Reviews
Track Listing
Credits
Releases
Similar Albums
Moods and Themes
None of the leftists here have shown any real independent thought. So as long as the prevailing AI is Wokebot, they will embrace it.
AI thinks twatever AI has been told (coded) to think. Period!
As evidence I give you... Elon Musk's "Grok" and South African "Genocide" of whites!
https://www.theguardian.com/technology/2025/may/16/elon-musks-ai-firm-blames-unauthorised-change-for-chatbots-rant-about-white-genocide
Elon Musk’s AI firm blames unauthorised change for chatbot’s rant about ‘white genocide’
xAI’s Grok bot repeatedly referred to widely discredited claim about South Africa that has been touted by Donald Trump
Well, everyone else is using "genocide" in over-the-top and inappropriate ways. If Palestinians are facing genocide, then so are Afrikaners.
"Courts have the power to limit government censorship, and *constitutional* protections OVER ... democracies are necessary bulwarks to defend free expression. But these legal shields were built to check governments, not to oversee private AI systems that filter what information reaches us."
The USA is a *Constitutional* Republic ... NOT a 'democracy'.
The most widely known indoctrinated curse of this nation is the championing that the USA is just a 'democracy' (i.e., The [WE] Identify-as majority mob RULES/STEALS). That premise has literally turned the Halls of Justice into the Halls of Criminals, as well as laid the foundation for the very division this nation faces.
Yep.. That old-crusty definition of what a USA *is* was important after all. My only hope is it can be restored by the people even while the people are getting promised STOLEN ?free? ponies by Criminals running for Office.
Anecdotally, I'm told by Gen Z kids I know that their friends are mostly destroyed by AI so far. None of them do any schoolwork at all, and none of them know how to think. It's just one data point.
Schools should go back to all assignments being handwritten hard copy. That would help, I think. Even if they just copied from AI, at least they would have to process the information to some degree.
And no computers or devices in class. Unless it's a class about computers.
I was at the tail end of that being the way things were. I can't imagine what it's like now, only 30 or so years later.
I have taught college for 10 years. I think you are not seeing this rightly. So I teach Biblical Greek and would be THRILLED if someone cared enough to ask ChatGPT a question about Greek.
It is the loss of the dynamism of the intellect and will as shown by Lonergan and Norris Clarke following Marechal and Blondel.
Love of Truth is primarily a religious/spiritual inculcation, which is why Jews so disproportionately win the Nobel Prize.
Nobody stops learning because of new technology. After all, they have teachers. I see that meta-analyses show e-books are relatively ineffective for learning. But if you don't love reading, you couldn't care less about what you are not reading.
I think he's talking more about using AI to do the assignments, not using it as an advanced search for useful information.
Of course people aren't going to stop learning in general. And good teachers can make a big difference. But I think AI might be killing the traditional model of schooling. Which is probably for the best (even if fairly traditional schooling worked pretty well for me).
Whatever you think of AI and its effects, powers, value, what have you, we are at the beginning of the cycle. We are all using the Model T version of what it will eventually be.
Maybe, like the Model T, we need someone to walk in front of AI waving a red flag to alert the populace to the oncoming dangers.
Freedom to think? No. Ability to think? Probably.
"The current iteration of AI already edits our emails, sorts our inboxes, and picks the next song we listen to."
Not on my computer. Sorry about yours.
It would do it wrong.
I'd like to see some AIs trained on very different sets of data. What would an AI that was trained only on, say, books published more than 50 years ago be like?
What would an AI that was trained only on, say, books published more than 50 years ago be like?
It would disapprove of Elon's purchase of Twitter, claim it misspoke, and then assert that all the 20 yr. old coders who work with it struggle to keep up with it.
Slightly more seriously: how often do you email people about Huck Finn or Atticus Finch?
I don't want AI to write my emails. I enjoy writing things myself. This specific example is just about curiosity. And of me becoming an old fart who is suspicious of pretty much anything that has been new within my lifetime.
I was being too obtuse.
You can effectively prompt AIs to behave like this now by simply telling them to only use words from a certain time period or from a specific source or body of work or in a specific style.
Limiting the dataset doesn't make the model any more meaningful; the AI doesn't extract meaning. It just locks you into primarily more 'begats' and Shakespearean/Chaucer-speak, and secondarily into a little more niche insight (it will hew more closely and accurately to, e.g., Shakespeare or Chaucer as your topics get more relevant to them) and more esoteric hallucinations that will be harder to unravel, as everyone from those periods is dead *and*, even if they were alive, not even they necessarily had it figured out.
In any event, there are lots and lots of preconstructed models with different data sets to choose from and places curating and comparing open datasets or models for you to see for yourself how things differ (or not).
Ultimately, the most general-purpose and useful AI is the one trained on the largest and/or most useful dataset, and the vast majority of people aren't emailing or texting each other about the Western Canon.
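To make the prompt-level restriction mentioned a couple of paragraphs up concrete, here is a minimal sketch in Python, assuming the OpenAI Python SDK and a placeholder model name; any chat-style API would work the same way:

from openai import OpenAI

# Minimal sketch: constrain a chat model's vocabulary and references through
# the system prompt. Assumes OPENAI_API_KEY is set in the environment; the
# model name below is a placeholder, not a recommendation.
client = OpenAI()

style_constraint = (
    "Answer using only vocabulary, idioms, and references that an English "
    "writer before 1975 could have used. Do not mention any event, "
    "technology, or work that appeared after that date."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": style_constraint},
        {"role": "user", "content": "Explain what a computer is."},
    ],
)

print(response.choices[0].message.content)

Note that the constraint lives entirely in the prompt: it shapes the style of the output but does not change what the underlying model was trained on, which is the point about datasets above.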
Want to see a demo of AI lying to you? Go to Google and search the following: Is quantitative easing theft? I got a lot of articles telling me about the many benefits of it.
I was in AI when it was neural nets, NLP, and Expert Systems, and I see absolutely no progress toward 'intelligence', so that is the first thing you should be honest about. When you address ChatGPT, there is no consciousness there.
I don't think consciousness and intelligence are the same thing. I would say that the AIs have some kind of intelligence. Just not one that particularly resembles human cognition. Depends on how you define "intelligence" I guess.
The only place I consistently see AI, and ignore it, is when it summarizes amazon reviews. I think it says something like-- some people like this product because it works, while others disagree.
I doubt it has time for the people who give 1-star reviews because the item arrived late or with a piece missing or something, my all-time favorites.
Citations please. Most of your banal examples are algorithms or bots, not AI.
Where do you draw the line? Bots at least are a sort of AI (by contemporary usage). They take natural language inputs and (sometimes) respond in an appropriate manner.
The problem is when it becomes a crutch.
Same way the calculator did. It's a shortcut to answers, and it spares its user the need to know how to get an answer. In doing so, it encourages laziness. What's 15% of your dinner bill? *whip out the ol' pocket computer.*
It also stymies innovation and experimentation by trial and error. I'll even admit to this one personally. I went through about four loaves of a particular bread I was trying to bake - it rising and then collapsing every time - and it hit me. Hey Gemini, why is this happening? Scale back on the yeast. Ding, the one thing I DIDN'T think of. Next loaf went up perfectly. I'm not a baker, it's just something I do sometimes as a hobby. But instead of honing my skill, I just outsourced for the answer to get my end product. Somehow it just wasn't as rewarding. Instead of figuring out the nuance of that recipe, I took a shortcut.
I think an interesting ethical question will be when people start trying to take credit for an AI's work. I'm not just talking about college kids spitting out essays - but researchers developing breakthroughs where AI did most of the heavy lifting, doctors making astounding diagnoses, lawyers coming up with bulletproof arguments. Should they get the accolades, or should their computer?
Gemini has flat out given bogus information more times than it has helped. Someone on that team needs to exclude reddit from its sources. When I've gone back and done a traditional search for the incorrect information to find the source - half the time it's referencing reddit. It would be more accurate if it used pornhub as a primary source.
I would like to see an exploration of the premise that AI should always (that's the difficult part?) seek out unintended consequences.
Playing with the new AI driven Google search - it's more like the unintended humor of the retarded kid in class than it is intelligent.