The Volokh Conspiracy

Mostly law professors | Sometimes contrarian | Often libertarian | Always independent

AI and Constitutions, from My Hoover Institution Colleague Andy Hall


A very interesting post on his Free Systems Substack; I'm not sure what to think of the subject, but it struck me as much worth passing along. An excerpt:

I'm a political economy professor who studies constitutional design: how societies create structures that constrain their most powerful actors, and what happens when those structures fail. I've also spent years working on how to build democratic accountability into technological systems—at Meta, where I've helped to design both crowdsourced and expert-driven oversight for content moderation affecting billions, and in crypto, where I've studied how decentralized protocols can create constraints that bind even founders.

AI leaders have long been worried about the same problem: constraining their own power. It animated Elon Musk's midnight emails to Sam Altman in 2016. It dominated Greg Brockman's and Ilya Sutskever's 2017 memo to Musk, where they urged against a structure for OpenAI that would allow Musk to "become a dictator if you chose to."

Fast forward to 2026 and AI's capabilities are reaching an astonishing inflection point, with the industry now invoking the language of constitutions in a much more urgent and public way. "Humanity is about to be handed almost unimaginable power," Dario Amodei wrote this week, "and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it."

Ideas on how to deal with this concentration of power have often seemed uninspired—a global pause in AI development the industry knows will never happen, a lawsuit to nip at the heels of OpenAI for its changing governance structure.

Claude's revised constitution, published last week, offers perhaps our most robust insight into how a major tech company is wrestling with the prospect of effectively steering its wildly superhuman systems. What to make of it?

It's thoughtful, philosophically sophisticated, and … it's not a constitution. Anthropic writes it, interprets it, enforces it, and can rewrite it tomorrow. There is no separation of powers, no external enforcement, no mechanism by which anyone could check Anthropic if Anthropic defected from its stated principles. It is enlightened absolutism, written down.

AI leaders are in a tricky position here. We are in genuinely uncharted territory and Amodei and team deserve great credit for doing some of this thinking in public.

Could highly advanced AI create a new kind of all-powerful dictatorship? What would this look like, and how can we stop it? These are perhaps the most important questions in AI governance. Yet the conversation so far has been conducted almost entirely by technologists and philosophers. The problem of constraining power is ancient. Political economists from Polybius to Madison have spent millennia studying how societies shackle their despots.

If Brockman and Sutskever were right in 2017 that we should "create some other structure," then nine years later, we should ask: what would that structure actually look like? The political economics of constitutional design—from Polybius, to Madison, to the modern research of North and Weingast, or Acemoglu and Robinson—offers the right tools for this problem. It's time we used them.

What does an AI dictatorship look like?

Part of the problem is that "AI dictatorship" can mean at least three different things:

The company becomes the dictator. One company achieves such dominance through AI capabilities that it becomes a de facto sovereign—too powerful to regulate, compete with, or resist. This is what Sutskever and Brockman were worried about in that 2017 email. If Musk controlled the company that controlled AGI, he could become a dictator "if he chose to."

The government becomes the dictator. A state controls the all-powerful model and uses it to surveil, predict, and control its population so effectively that political opposition becomes impossible. The AI enables dictatorship; it doesn't replace the dictator. This is the fear behind most discussions of AI and authoritarianism, laid out provocatively in the AI2027 scenario written by Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, and Romeo Dean.

The AI becomes the dictator. The AI itself has goals, pursues them, and humans can't stop it. It isn't a tool of human dictators—it is the dictator. This is the classic "misalignment" scenario that dominates AI safety discourse; it's what Amanda Askell's 'soul doc' and the subsequent Claude constitution are driving towards.

These are different threats. And conflating them makes it nearly impossible to think clearly about what kinds of governance would actually help.

But all three do share something: they are problems of unchecked power. And the question of how to check power is not new. Political economists from Plato and Aristotle to Locke and Madison and beyond have been working on it for millennia.

What political economy teaches us about constraining power

Read the rest here.