The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
"AI, Society, and Democracy: Just Relax"
I wanted to specially note this Digitalist Papers essay by my Hoover colleague, economist (indeed, Grumpy Economist) John Cochrane; I'm somewhat more worried than he is, but I thought his perspective was interesting and worth noting. Here's the Conclusion:
As a concrete example of the kind of thinking I argue against, Daron Acemoglu writes,
We must remember that existing social and economic relations are exceedingly complex. When they are disrupted, all kinds of unforeseen consequences can follow…
We urgently need to pay greater attention to how the next wave of disruptive innovation could affect our social, democratic, and civic institutions. Getting the most out of creative destruction requires a proper balance between pro-innovation public policies and democratic input. If we leave it to tech entrepreneurs to safeguard our institutions, we risk more destruction than we bargained for….
The first paragraph is correct. But the logical implication is the converse—if relations are "complex" and consequences "unforeseen," the machinery of our political and regulatory state is incapable of doing anything about it. The second paragraph epitomizes the fuzzy thinking of passive voice. Who is this "we"? How much more "attention" can AI get than the mass of speculation in which we (this time I mean literally we) are engaged? Who does this "getting"?
Who is to determine "proper balance"? Balancing "pro-innovation public policies and democratic input" is Orwellianly autocratic. Our task was to save democracy, not to "balance" democracy against "public policies." Is not the effect of most "public policy" precisely to slow down innovation in order to preserve the status quo? "We" not "leav[ing] it to tech entrepreneurs" means a radical appropriation of property rights and rule of law.
What's the alternative? Of course AI is not perfectly safe. Of course it will lead to radical changes, most for the better but not all. Of course it will affect society and our political system, in complex, disruptive, and unforeseen ways. How will we adapt? How will we strengthen democracy, if we get around to wanting to strengthen democracy rather than the current project of tearing it apart?
The answer is straightforward: As we always have. Competition. The government must enforce rule of law, not the tyranny of the regulator. Trust democracy, not paternalistic aristocracy—rule by independent, unaccountable, self-styled technocrats, insulated from the democratic political process. Remain a government of rights, not of permissions. Trust and strengthen our institutions, including all of civil society, media, and academia, not just federal regulatory agencies, to detect and remedy problems as they occur. Relax. It's going to be great.
Editor's Note: We invite comments and request that they be civil and on-topic. We do not moderate or assume any responsibility for comments, which are owned by the readers who post them. Comments do not represent the views of Reason.com or Reason Foundation. We reserve the right to delete any comment for any reason at any time. Comments may only be edited within 5 minutes of posting. Report abuses.
All the more impressive that it does so without using the passive voice at all!
Not all passive voice uses passive verbs. "We" need to do something is about as passive as you can get. As it says, who is "we" and what are "we" supposed to do?
What? Passive voice is by definition about the form of verbs. People often say that writers should not use passive voice, but the passive voice is fine when the actor is unknown or insignificant. Sometimes people employ the passive voice intentionally to obfuscate agency ("mistakes were made"), and we should criticize that kind of usage. Other times, the reading of a statement is made more difficult because the passive voice is used, and such a choice should also be criticized.
You are correct, sir!
All anti-AI screeds I have seen can't seem to avoid a basic conflict:
* AI is so advanced it will destroy society.
* Only government regulation can save us.
Government bureaucrats, known for their speed, skill, wisdom, objectivity, patience? Two possibilities:
* If government bureaucrats can rein in AI and save us all, then AI is too weak and slow to be a threat.
* AI has taken over the government and WE'RE ALL DOOOOMED! Run for the hills!
Whenever someone is talking about AI "safety" and such, it appears the "danger" they are most concerned about is something like Trump getting elected, or speech and ideas they don't like being expressed.
Bill Gates was just on the air again with his loud calls for the government to use AI to censor speech across the board. Speech that is skeptical of vaccines, hate speech, etc.; really, just any speech the regime doesn't like.
At the same time John Kerry opined at the WEF: "Our First Amendment stands as a major block to the ability to be able to hammer [disinformation] out of existence. What we need is to win...the right to govern by hopefully winning enough votes that you’re free to be able to implement change."
Bill Gates was a lousy programmer.
He thought the Internet was a passing fancy and Microsoft had to scramble to catch up.
Every time I've had to use Microsoft software, I found bugs within minutes, because their QA and QC apparently only check that the approved methods work, and if you want to make it work for you, it barfs.
He lied lied lied during the anti-trust trial, and while I wish the government had just butted out and left them to flounder on their own, it was fun watching Bill Gates lie so much.
His miserable track record does not impress. I think I'll choose to not believe him.
He didn't lie. He did, however, dissemble to the max.
I think he thought that was the game. Which means he either had terrible lawyers or thought he knew better than they did. I have a pretty good guess which.
Huh, Kerry continues, "Now, obviously there are some people in our country who are prepared to implement change in other ways."
https://x.com/SwipeWright/status/1840231811554664541
"Trust democracy, not paternalistic aristocracy—rule by independent, unaccountable, self-styled technocrats, insulated from the democratic political process."
I agree with this: Through the democratic process we can and should craft laws and regulations to help control AI's worse aspects, hopefully avoiding increased control of our lives by self-styled technocrats who run major companies. Better for politically accountable people to make line-drawing decisions.
What I don't get is why Cochrane wrote that sentence, because it's exactly the opposite of the rest of his essay's message.
I can log off Facebook or X any time I want, cancel netflix and prime, toss the alexa and the iphone. All whenever I want. Boo hoo. I'll grant that search is a little harder maybe, more monopolized, but still. Where's the control?
Now, let's say I would like to unsubscribe from this "government" thing - the level of service is lacking and quite frankly, the value just isn't there for the huge sums of money I pay for it. Oh, wait. I can't. There are guns and prisons and deadly force guaranteed to prohibit me from doing that; in fact, the whole enterprise rests on that one premise of violence.
Just as an example, Amazon and Wal-Mart have massive influence over the price and quality of consumer goods, and both are using AI in their procurement and pricing. We're soon going to see corporations using AI input for personnel decisions - they already do for some things such as staffing levels. Some of these functions should have public oversight through the mechanisms of law.
Your first comment was wildly deluded and off the mark. Even if your second comment here were compelling on its own (and it’s not), it simply doesn’t come close to supporting the first.
In general, we have more than enough regulations about the quality of consumer goods, and about hiring and personnel decisions. I don’t see why the incremental increase in use of computer algorithms by people in the course of making business decisions changes anything about the sufficiency and application of those laws and regulations.
The influence that Amazon and WalMart (the latter isn’t a tech company) have had over the price of consumer goods has been to bring the prices down drastically by increasing competition. As big as they are, they still have plenty of competition. They do not have the market power to fix prices. And if they did lack competition and had market power, that would be an antitrust issue and should be approached through that lens, not through a lens of “let’s have government input on the prices being fixed.”
Granted, there are a lot of extremely low quality imported consumer goods being sold on Amazon (as well as Temu, Wish and others). If you dislike that, you can easily not buy those products. But if you really wanted the government to do something about that I could see putting a tariff on imported consumer goods.
What I don’t get is why Cochrane wrote that sentence, because it’s exactly the opposite of the rest of his essay’s message.
Is this some form of humorless sarcasm? I assume not...
Obviously Cochrane is defining "paternalistic aristocracy" for you, as opposed to a "democracy." He's worried that AI fearmongering will join with all the other fearmongering that currently infuses our politics with anti-democratic urges and lead to an undemocratic America with an executive branch full of "independent, unaccountable, self-styled technocrats."
The "tech" in "technocrat" is unrelated to the "tech" in "tech company." A technocrat is a bureaucrat plus "expert" rolled into one, in other words a regulator who feels the need to exert their putative expertise onto the market. It's what the anti-Chevron crowd most fears.
The first paragraph is correct. But the logical implication is the converse—if relations are "complex" and consequences "unforeseen," the machinery of our political and regulatory state is incapable of doing anything about it.
That exemplifies the kind of empty political rationalism the conservative Michael Oakeshott warned against. It has been a distinguishing feature of most of the world's worst political disasters, including all the various flavors of totalitarianism.
Our political and regulatory state can continue, with a capacity to make policy, judge its effectiveness by experience, and accountably correct whatever has not worked. That has been the actual method underlying modern political successes, such as policies to develop medical science, or to regulate the excesses of capital markets, or to develop public infrastructure, or to provide economic security for the aged and infirm.
The alternative insisted upon by the OP is reckless. It demands for AI a public policy pointedly designed to put the task of assessing AI's effects beyond public accountability. It insists instead on more rationalism—this time free market rationalism—taken on faith.
The people doing this advocacy propose to work without erasers on their pencils. They plan to insist afterwards that whatever happens was by definition the best result that could have happened.
The one thing certain about that method is that the costs it inflicts will not be paid by the parties who inflicted them. Neither they, nor anyone else, has the capacity to say now how large those costs may prove to be, or to make any promise that the benefits will even outweigh them. It is precisely because they do not know what to expect that they now demand they not be held accountable.
https://jensorensen.com/2024/09/11/ai-model-collapse-mad-cow-cartoon/