Artificial Intelligence

The Authoritarian Side of Effective Altruism Comes for AI

Proposed bills reveal the extreme measures E.A.’s AI doomsayers support.


The effective altruism (E.A.) movement, which began with the premise that philanthropists should do the most good per dollar spent, injected pragmatism into an arena where good intentions can trump rational, effective number crunching. A generation of effective altruists—many schooled in Silicon Valley thought—have since embraced this metric-driven, impartial philosophy and translated their own good intentions into good works.

However, artificial intelligence (AI) exposes a flaw in the movement: a powerful faction of doomsayers. The result is not just misplaced philanthropy but lobbying to create agencies with utterly alarming authority.

For various reasons, the E.A. movement has turned its attention toward longtermism—a more radical form of its utilitarianism that weighs each potential future life roughly the same as a living person's. Because any human extinction event, however unlikely, imposes effectively infinite costs on that accounting, longtermists can place enormous moral value on reducing whatever they view as existential risk.
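To see the arithmetic behind that reasoning, here is a rough expected-value sketch; the probability and population figures are purely illustrative assumptions, not numbers drawn from any E.A. source or from the bills discussed below:

\[
\underbrace{\mathbb{E}[\text{lives lost}]}_{\text{expected cost}}
\;=\;
\underbrace{p}_{\substack{\text{assumed extinction risk}}}
\times
\underbrace{N}_{\substack{\text{assumed potential future lives}}}
\;=\;
10^{-6} \times 10^{16}
\;=\;
10^{10}
\]

On that math, even a one-in-a-million risk implies an expected loss of ten billion lives, which is how a tiny probability gets converted into an overriding moral priority.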

Certain proponents of E.A. argue that intelligent-enough AIs pose such a risk. Indeed, one of the most influential and longest-standing E.A. organizations, the Machine Intelligence Research Institute (MIRI), recently stated that its "objective is to convince major powers to shut down the development of frontier AI systems worldwide." MIRI's founder, Eliezer Yudkowsky, notoriously called on the U.S. to bomb "rogue" data centers and threaten nuclear war against countries that don't halt AI research.

Extremism is not unique to AI debates. Environmentalists have Just Stop Oil and its ill-advised stunts, religions have violent extremists, and even Luddites have the Unabomber. But in E.A. the radicals are prominent. Sam Bankman-Fried claimed his cryptocurrency scams were part of a Machiavellian plot to supply tens of millions of dollars to E.A. organizations, including through his own Future Fund.

Despite such blots on the movement's reputation, AI doomers have hundreds of millions of E.A. dollars in backing. And while extremists publish manifestos, they rarely propose legislation exposing just how far they're willing to go—until now.

Enter two proposed bills: the federal Responsible Advanced Artificial Intelligence Act (RAAIA) drafted by the Center for AI Policy, and California's Senate Bill 1047 sponsored by the Center for AI Safety (CAIS). Both bills and their backers are closely tied to E.A. and longtermist funding and organizations.

The RAAIA is, simply put, shocking in its authoritarianism. The bill would create a new federal agency (run by an administrator appointed by the president) to govern a wide range of AI systems, from weather forecasting to weapons. Companies must get permits before developing software, and the agency can attach arbitrary conditions to those permits. If a permitted model proves too competent, the agency can halt the research. Open-source projects must somehow verify and track the identities of all users and ensure each has a "legitimate, pro-social interest."

The emergency powers the RAAIA would grant to the president and administrator are dictatorial. The administrator can, on his own authority, shut down the entire frontier AI industry for six months. If the president declares an AI emergency, the administrator can seize and destroy hardware and software, enforced by guards "physically removing any unauthorized persons from specified facilities" and/or "taking full possession and control of specified locations or equipment." He can also conscript the FBI and U.S. Marshals and direct other federal law enforcement officers. The administrator would have the prosecutorial and enforcement powers to subpoena witnesses, compel testimony, conduct raids, and demand any evidence deemed relevant, even for speculative "proactive" investigations.

Further, the RAAIA would create a registry for all high-performance AI hardware. If you "buy, sell, gift, receive, trade, or transport" even one covered microchip without the required form, you will have committed a crime. The bill imposes criminal liability for other violations, and agency employees can be criminally prosecuted for "willfully and intentionally" refusing to perform duties prescribed by the act. 

The bill also includes tricks attempting to insulate the administrator from influence by future administrations or other aspects of government. For example, there's a one-way ratchet clause empowering the administrator to update rules but making it difficult to "weaken or loosen" them. It attempts to constrain the judicial standard of review, compress appeal timeframes, and exempt the administrator from the Congressional Review Act, among other things.

Predictably, the agency would be funded by the fines and fees it imposes. This creates an incentive to levy them, limits congressional budgetary oversight, and demonstrates the backers' disdain for democratic checks and balances.

While the language in California's S.B. 1047 is milder, CAIS and state Sen. Scott Wiener (D–San Francisco) have written a state bill that could have a similarly authoritarian effect.

S.B. 1047 would establish a new Frontier Model Division (FMD) to regulate organizations training AI models that require more than a certain threshold of computing power or expense—a threshold the FMD would set. Cloud computing providers would be required to implement a kill switch to shut down AI models if anything goes wrong, and additional emergency authorities would be given to the governor.

But at its core, S.B. 1047 requires AI developers to prove a negative to a hostile regulator before proceeding. Specifically, developers of certain high-cost models must—somehow—prove ahead of time that their product could never be used to cause "critical harms."

Variations of the word "reasonable" appear over 30 times in S.B. 1047. Of course, it is the FMD that decides what counts as reasonable. Other weasel words include "material," "good faith," and "reasonably foreseeable." Wiener and his co-authors have hidden their authoritarianism in this vague and arbitrary language.

If the FMD—likely staffed with E.A.-influenced AI doomers like those who wrote the bill—doesn't like an AI research proposal, it can impose custom conditions or block it entirely. Even if the FMD approves a plan, it can later determine that the plan was unreasonable and punish the company. All of this will inevitably deter the development of new models, which is perhaps the point.

The deceptively milder language of S.B. 1047 is partly why it has already passed the California state Senate and is moving through the state Assembly. For now, the RAAIA lacks congressional sponsorship. Yet both bills merit alarm. They are products of a radical E.A. faction that, in its fervor to regulate away a perceived threat, is willing to blindly empower governments through unaccountable agencies, vague requirements, presumption of guilt, and unchecked emergency powers.