Artificial Intelligence

Will AI Kill Our Freedom To Think?

Algorithmic systems increasingly shape what we know, see, and question. To preserve free inquiry, we need transparency, competition, and a commitment to timeless principles of open debate.


The current iteration of AI already edits our emails, sorts our inboxes, and picks the next song we listen to. But convenience is just the start. Soon, the same technology could determine which ideas ever reach your mind—or form within it. 

Two possible futures lie ahead. In one, artificial intelligence becomes a shadow censor: Hidden ranking rules will throttle dissent, liability fears will chill speech, default recommendations and flattering prompts will dull our judgment, and people will stop questioning the information they're given. This is algorithmic tyranny. 

In the other, AI becomes a partner in truth seeking. It will surface counterarguments, flag open questions, draw on insight far beyond any single mind, and prompt us to check the evidence and sources. Errors will be chipped away, and knowledge will grow. Our freedom to question everything will stay intact and even thrive. 

The stakes couldn't be higher. AI currently guides about one-fifth of our waking hours, according to our 2024 time-use analysis. It drafts our contracts, diagnoses our diseases, and even ghostwrites our laws. The principles coded into these systems are becoming the hidden structure that shapes human thought. 

Throughout history, governments have banned books, closed newspapers, and silenced critics. As Socrates discovered when sentenced to death for "corrupting the youth," questioning authority has always carried risks. AI's power to shape thought risks continuing one of humanity's oldest patterns of control. 

The goal hasn't changed; the method has. 

Today, the spectrum of censorship runs from obvious to subtle: China's Great Firewall directly blocks content to maintain party control; "fact-checking" systems apply labels with the goal of reducing misinformation; organizations make small, "safety-minded" decisions that gradually shrink what we can see; and platforms overmoderate in hopes of appearing responsible. Controversial ideas don't have to be banned; they simply vanish when algorithms, trained to "err on the side of removal," mute anything that looks risky.

The cost of idea suppression is personal. Consider a child whose asthma could improve with an off-label treatment. Even if the medication has been used successfully by thousands of people, an AI search may show only "approved" protocols, burying the lifesaving option. Once a few central systems become our standard for truth, people might believe no alternative is worth investigating.

From medicine to finance to politics, invisible boundaries now have the power to shape what we can know and consider. Against these evolving threats stand timeless principles we have to protect and promote. 

These include three foundational ideas, articulated by the philosopher John Stuart Mill, for protecting free thought: First, admit humans make mistakes. History's abandoned "truths"—from Earth-centered astronomy to debunked racial hierarchies—prove that no authority escapes error. Second, welcome opposing views. Ideas improve only when challenged by strong counterarguments, and complex issues rarely fit a single perspective. Third, regularly question even accepted truths. Even correct beliefs lose their force unless frequently reexamined. 

These three principles—what we call "Mill's Trident"—create a foundation where truth emerges through competition and testing. But this exchange needs active participants, not passive consumers. Studies show we learn better when we ask our own questions rather than just accepting answers. As Socrates taught, wisdom begins with questions that reveal what we don't know. In this exchange of ideas, the people who question most gain the deepest knowledge.

To keep the free development of thought alive in the AI age, we must translate those timeless principles into practical safeguards. Courts have the power to limit government censorship, and constitutional protections in many democracies are necessary bulwarks to defend free expression. But these legal shields were built to check governments, not to oversee private AI systems that filter what information reaches us. 

Meta recently shared the weights for Llama 3, the raw numbers that make up the model. This is a welcome move toward transparency, but plenty remains out of view, including the data and code used to train it. And even if those were public, the eye-watering cost of computation puts true replication out of reach for almost everyone. Moreover, many other leading AI systems remain entirely closed, their inner workings hidden from outside scrutiny.

Open weights help, but transparency alone won't solve the problem. We also need open competition. Every AI system reflects choices about what data matters and what goals to pursue. If one model dominates, those choices set the limits of debate for everyone. We need the ability to compare models side by side, and users must be free to move their attention—and their data—between systems at will. When AI systems compete openly, we can compare them against each other in real time and more easily spot their mistakes. 

To truly protect free inquiry moving forward, the principles we value must be built into the technology itself. For this reason, our organizations—the Cosmos Institute and the Foundation for Individual Rights and Expression (FIRE)—are announcing $1 million in grants to back open-source AI projects that widen the marketplace of ideas and ensure the future of AI is free.

Think of an AI challenger that pokes holes in your presuppositions and then coaches you forward; or an arena where open, swappable AI models debate in plain view before a live crowd; or a tamper-proof logbook that stamps every answer an AI model gives onto a public ledger, so nothing can be quietly erased and every change is visible to all. We want AI systems that help people discover, question, and debate more, not ones that nudge us to stop thinking.

For us as individuals, the most important step is the simplest: Keep asking questions. The pull to let AI become an "autocomplete for life" will feel irresistible. It's up to us to push back on systems that won't show their work and to seek out the unexpected, the overlooked, and the contrarian. 

A good AI should sharpen your thinking, not replace it. Your curiosity, not any algorithm, remains the most powerful force for truth.