The Authoritarian Side of Effective Altruism Comes for AI
Proposed bills reveal the extreme measures E.A.’s AI doomsayers support.

The effective altruism (E.A.) movement, which began with the premise that philanthropists should do the most good per dollar spent, injected pragmatism into an arena where good intentions can trump rational, effective number crunching. A generation of effective altruists—many schooled in Silicon Valley thought—has since embraced this metric-driven, impartial philosophy and translated their own good intentions into good works.
However, artificial intelligence (AI) exposes a flaw in the movement: a powerful faction of doomsayers. The result is not just misplaced philanthropy but lobbying to create agencies with utterly alarming authority.
For various reasons, the E.A. movement has turned its attention toward longtermism—a more radical form of its utilitarianism that weighs the value of each potential future life approximately the same as a living person's. Because any human extinction event would foreclose a virtually unbounded number of those future lives, even a remote chance of extinction carries near-infinite expected costs, and longtermists can therefore place enormous moral value on reducing whatever they view as existential risk.
Certain proponents of E.A. argue that intelligent-enough AIs pose such risk. Indeed, one of the most influential and longest-standing E.A. organizations, the Machine Intelligence Research Institute (MIRI), recently stated that its "objective is to convince major powers to shut down the development of frontier AI systems worldwide." MIRI's founder, Eliezer Yudkowsky, notoriously called on the U.S. to bomb "rogue" data centers and threaten nuclear war against countries that don't halt AI research.
Extremism is not unique to AI debates. Environmentalists have Just Stop Oil and its inadvisable displays, religions have violent extremists, and even Luddites have the Unabomber. But in E.A. the radicals are prominent. Sam Bankman-Fried claimed his cryptocurrency fraud was a Machiavellian plot to funnel tens of millions of dollars to E.A. organizations, including through his own Future Fund.
Despite such blots on the movement's reputation, AI doomers have hundreds of millions of E.A. dollars in backing. And while extremists publish manifestos, they rarely propose legislation exposing just how far they're willing to go—until now.
Enter two proposed bills: the federal Responsible Advanced Artificial Intelligence Act (RAAIA) drafted by the Center for AI Policy, and California's Senate Bill 1047 sponsored by the Center for AI Safety (CAIS). Both bills and their backers are closely tied to E.A. and longtermist funding and organizations.
The RAAIA is, simply put, shocking in its authoritarianism. The bill would create a new federal agency (run by an administrator appointed by the president) to govern a wide range of AI systems, from weather forecasting to weapons. Companies would have to obtain permits before developing covered software, and the agency could attach arbitrary conditions to those permits. If a permitted model proved too capable, the agency could halt the research. Open-source projects would somehow have to verify and track the identities of all users and ensure each has a "legitimate, pro-social interest."
The emergency powers the RAAIA would grant to the president and administrator are dictatorial. The administrator could, on his own authority, shut down the entire frontier AI industry for six months. If the president declared an AI emergency, the administrator could seize and destroy hardware and software, enforced by guards "physically removing any unauthorized persons from specified facilities" and/or "taking full possession and control of specified locations or equipment." The administrator could also conscript the FBI and U.S. Marshals and direct other federal law enforcement officers, and would have the prosecutorial and enforcement powers to subpoena witnesses, compel testimony, conduct raids, and demand any evidence deemed relevant, even for speculative "proactive" investigations.
Further, the RAAIA would create a registry for all high-performance AI hardware. If you "buy, sell, gift, receive, trade, or transport" even one covered microchip without the required form, you would be committing a crime. The bill imposes criminal liability for other violations, and agency employees could be criminally prosecuted for "willfully and intentionally" refusing to perform duties prescribed by the act.
The bill also includes tricks intended to insulate the administrator from influence by future administrations or other parts of government. For example, there's a one-way ratchet clause empowering the administrator to update rules but making it difficult to "weaken or loosen" them. It attempts to constrain the judicial standard of review, compress appeal timeframes, and exempt the administrator from the Congressional Review Act, among other things.
Predictably, the agency is funded through its imposed fines and fees. This creates an incentive to levy them, limits congressional budgetary oversight, and demonstrates the backers' disdain for democratic checks and balances.
While the language in California's S.B. 1047 is milder, CAIS and state Sen. Scott Wiener (D–San Francisco) have written a state bill that could have a similarly authoritarian effect.
S.B. 1047 would create a new Frontier Model Division (FMD) to regulate organizations training AI models that require more than a certain threshold of computing power or expense—a threshold the FMD would set. Cloud computing providers would be required to implement a kill switch to shut down AI models if anything goes wrong, and additional emergency authorities would be given to the governor.
But at its core, S.B. 1047 requires AI developers to prove a negative to a hostile regulator before proceeding. Specifically, developers of certain high-cost models must—somehow—prove ahead of time that their product could never be used to cause "critical damages."
Variations of the word "reasonable" appear over 30 times in S.B. 1047. Of course, the FMD determines how "reasonable" is defined. Other weasel words used include "material," "good faith," and "reasonably foreseeable." Wiener and his co-authors have hidden their authoritarianism in this vague and arbitrary language.
If the FMD—likely staffed with E.A.-influenced AI doomers like those who wrote the bill—doesn't like an AI research proposal, it can impose custom conditions or block it entirely. Even if the FMD approves a plan, it can later determine that the plan was unreasonable and punish the company. All of this will inevitably deter the development of new models, which is perhaps the point.
The deceptively milder language of S.B. 1047 is partly why it has already passed the California state Senate and is moving through the Assembly. For now, the RAAIA lacks congressional sponsorship. Yet both bills merit alarm. They are products of a radical E.A. faction that, in its fervor to regulate away a perceived threat, is willing to blindly empower governments through unaccountable agencies, vague requirements, presumption of guilt, and unchecked emergency powers.
They are products of a radical E.A. faction that, in its fervor to regulate away a perceived threat, is willing to blindly empower governments through unaccountable agencies, vague requirements, presumption of guilt, and unchecked emergency powers.
Sounds like M4e
And Jeffy.
Can you imagine your neighbor having a pig? MADNESS!!!
Gosh this sounds scary!
Oh. It's the same powers almost all federal bureaucracies have.
Let me know when you started getting so worked up and freaked out over Wickard v. Filburn, the Food and Drug Act, the DEA, the EPA, and every other regulatory agency.
Meanwhile, keep your PTSD in your pants.
That's what I was thinking as I read—sounds like SOP for the Regulatory State. Sounds like Chilson has never experienced what it feels like to have a regulatory boot up his ass.
“Sounds like Chilson has never experienced what it feels like to have a regulatory boot up his ass.”
“NEIL CHILSON is the former chief technologist for the Federal Trade Commission”
Sounds like Chilson is used to experiencing being the boot.
June jobs +206,000
New market highs
US economy booming, Peanuts.
Unemployment rate up.
May jobs, April jobs, March jobs, February jobs, January jobs ... no doubt all revised downwards.
Meanwhile, a few years back you posted kiddy porn to this site, and your initial handle was banned. The link below details all the evidence surrounding that ban. A decent person would honor that ban and stay away from Reason. Instead you keep showing up, acting as if all people should just be ok with a kiddy-porn-posting asshole hanging around. Since I cannot get you to stay away, the only thing I can do is post this boilerplate.
https://reason.com/2022/08/06/biden-comforts-the-comfortable/?comments=true#comment-9635836
Yep, that really happened.
Why did unemployment tick up shrike? What will be the downward revisions next month lol.
With these revisions, employment in April and May combined is 111,000 lower than previously reported. So yes, it is easy to “beat” when you have a pool of 111K jobs that never existed to push into this month.
https://www.zerohedge.com/markets/payrolls-rise-206k-after-huge-downward-revisions-unemployment-rate-jumps-three-year-high
So they added back the non existent jobs they removed the last two months. Lol.
private sector workers came in at 136K, well below the 160K expected and down from a downward revised 193K (was 229K). The gap was filled by - what else- deep stater and other government workers, as government payrolls jumped from 25K to 70K!
Private jobs under expectations, but government is hiring! Lol.
God damn shrike. You're an embarrassment.
turd, the ass-clown of the commentariat, lies; it’s all he ever does. turd is a kiddie diddler, and a pathological liar, entirely too stupid to remember which lies he posted even minutes ago, and also too stupid to understand we all know he’s a liar.
If anything he posts isn’t a lie, it’s totally accidental.
turd lies; it’s what he does. turd is a lying pile of lefty shit.
"Jan 6 = 9/11 (same motive)"
turd certainly is dishonest, but he’s got a heaping helping of stupid to go with his dishonesty. Stupid, lying, despicable steaming pile of lefty shit and proud of it!
Aw, don’t listen to them, buttplug. I’m with ya on this one. I mean, if things are this good now, I share your enthusiasm for how good it can be when dementia joe is sharing a jail cell with his son and we get a tax and regulation slashing businessman back in charge. Forward thinking is key.
Gotta say, I never thought you’d come around on this, but credit where due.
#BuyBiglyNov6
weighs the value of each future potential life approximately the same as a living person's
Or simply use the value of each past actual life.
Yudkowsky ... called on the U.S. to ... threaten nuclear war against countries that don't halt AI research.
And cyber attacks against countries that don't halt nuclear weapons research, right? RIGHT?!
What we really need is a ban on stupidity, artificial or not.
How about ... every election includes a question concerning the incumbent, regardless of whether he's running for re-election or not.
100% means the normal pension contribution for his last term. 50% means no addition to his pension. Anything less reaches back and undoes the previous contributions, with 0% zeroing out his pension altogether.
By the time careerist office holders near retirement, they've enriched themselves enough through graft and self-dealing that they don't much care about their pensions. The pension is just sofa cushion change to them at that point.
I know, but it's also a scorecard from voters, and would provide some kind of record they can't ignore as fake news.
We used to have a ban on stupidity. It was called natural selection.
Great article, certainly worth reading. I personally feel the Gov, in their typical ham-fisted manner, are trying to control AI development to serve their own purposes, under the guise of protecting us all. As always, they're full of shit.
Gov legislatures wanting AI “kill switches” shows our betters’ ignorance knows no bounds. However, I suspect MIRI probably has some good arguments, regarding AI development, that are more intelligent than nuking countries that aren’t following development rules, as the author critiqued.
Fuck everyone who has a vision for how other people should live. And double fuck everyone who thinks having a superior vision gives them a mandate to control others.
You could have stopped at "fuck everyone".
++
For various reasons, the E.A. movement has turned its attention toward longtermism—a more radical form of its utilitarianism that weighs the value of each future potential life approximately the same as a living person’s. Because any human extinction event, however unlikely, imposes infinite costs, longtermists can place enormous moral value on reducing whatever they view as existential risk.
So, evil then. Weird though since I'm pretty sure that group is also pretty on board with abortion which is...uhh...interesting given that overview statement.
The existential AI threat was once that it would seize control of the nuclear stockpile and annihilate humanity; now it is that it might accidentally blurt out something "offensive" and so, with this lowered bar, can be used to justify intrusive monitoring and regulation.
So these people believe that AI is such an "existential threat" that they are willing to unleash the true existential threat - nuclear war - to stop it? That is utterly pathetic. Regardless of your opinions about regulation, that is just pathetic, and frightening.
AI in itself presents no threat at all. It is a tool that can be used for various purposes. And since it is, ultimately, just software, it should not in itself be subject to regulation at all. But what it can be used for can and should be. Prohibiting AI systems from directly controlling deadly weapons makes sense. Prohibiting the systems themselves doesn't.
MIRI’s founder, Eliezer Yudkowsky, notoriously called on the U.S. to bomb “rogue” data centers and threaten nuclear war against countries that don’t halt AI research.
Seems to me that E.A. is a LOT more dangerous than A.I. The most dangerous thing about A.I. is that it could inspire a freakout by E.A. dudes who would then start a nuclear war.
Here’s a quiz question for ya: What cause has been responsible for the most evil ever perpetrated, the most mass murder, the most suffering?
Answer: The GOOD.
More people have been murdered in the name of the good than any other cause on earth. Think the Great Leap Forward, the Holodomor (Ukraine famine), the Cultural Revolution, the Holocaust, etc., etc. The people who perpetrated all of that thought they were doing good by some standard of “good”, mistaken or not. But their intention was to do "good".
Also, for all the freakoutedness about AI, I have not heard or seen any actual scenario put forth in any detail by the freaked out that would constitute any threat. The freaked out EA people seem to be all paranoia but no facts or details.
The ONLY threat posed by current AI is that some people don't understand its limitations and take words and images that were manipulated without understanding as something that can be relied on. E.g., the fools who went to ChatGPT to write a legal brief and didn't check the citations - which were fictional. Or the self-driving car developers whose car kept stopping to ask the passenger to take over whenever it couldn't identify something on the road, so they set it to drive on past unidentified images - it had been trained to recognize a pedestrian and a bicycle separately, but it could not identify a pedestrian walking a bicycle, so it drove over her.
In 50 years, it might be different - but that's what I thought 50 years ago! I remember ELIZA, software that would simulate Rogerian psychotherapy by grammatically manipulating what you typed in and reflecting it back to you. A modern cell phone has several orders of magnitude more computing power than the machines it ran on. I fed it texts designed to explore how it made those transformations, then to cause hilariously mixed-up responses. My wife talked to it for hours, the way she'd have talked to her mother if we could afford $1-a-minute long-distance bills. (In 1980 dollars!)
ChatGPT is ELIZA with a much larger database and the computing power to search through it thousands of times a second.
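Roughly how that reflection trick works, as a toy sketch in Python (the word list and helper names here are made up for illustration; this is not ELIZA's actual script):

```python
# Toy sketch only: a made-up pronoun-swap "reflection," not ELIZA's real rule set.
import re

# Hypothetical reflection table for the example.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

def reflect(text: str) -> str:
    # Swap first- and second-person words; leave everything else alone.
    words = re.findall(r"[\w']+", text.lower())
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(statement: str) -> str:
    # Bounce the reflected statement back as a Rogerian-style question.
    return f"Why do you say {reflect(statement)}?"

print(respond("I am worried about my mother."))
# -> Why do you say you are worried about your mother?
```

Feeding it oddly structured sentences breaks the pattern matching in exactly the hilarious ways described above, which is the point: there's no understanding anywhere in the loop.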
The effective altruism
Wait, stop. That's a contradiction. An oxymoron.
There is no "effective altruism." Altruism is "the sacrificed" and "the beneficiaries."