Artificial Intelligence

OpenAI Chief Sam Altman Wants an FDA-Style Agency for Artificial Intelligence

His licensing proposal would slow down A.I. innovation without really reducing A.I. risks.

The creation of a new Artificial Intelligence Regulatory Agency was widely endorsed today during a Senate Judiciary Subcommittee on Privacy, Technology, and the Law hearing on Oversight of A.I.: Rules for Artificial Intelligence. Senators and witnesses cited the Food and Drug Administration (FDA) and the Nuclear Regulatory Commission (NRC) as models for how the new A.I. agency might operate. This is a terrible idea.

The witnesses at the hearing were OpenAI CEO Sam Altman, IBM Chief Privacy & Trust Officer Christina Montgomery, and A.I. researcher-turned-critic Gary Marcus. In response to one senator's suggestion that the NRC might serve as a model for A.I. regulation, Altman naively agreed that the "NRC is a great analogy" for the type of A.I. regulation he favors. Marcus argued that A.I. should be licensed in much the same way that the FDA approves new drugs. Those are great models if your goal is to stymie progress or kill off new technologies.

The NRC has regulated the nuclear power industry nearly to death, and thanks to the FDA, it takes 12 to 15 years for a new drug to get from the lab bench to a patient's bedside. The unintended consequences of NRC overregulation include more deaths from pollution and accidents, and greater greenhouse gas emissions, than would otherwise have occurred. Likewise, the FDA's approval delays cost more lives than would be lost by speedily approving drugs that later have to be withdrawn.

A more circumspect Montgomery noted that current law already covers many of the safety and misuse concerns raised by new A.I. technologies. She pointed out specifically that companies using A.I. are not off the hook when it comes to their duty of care, that is, the obligation to use reasonable care to avoid injuring other people or their property. Companies are liable for discrimination in hiring or loan approvals, for example, whether those decisions are made by an algorithm or a human being. And if a medical A.I. gave bum treatment advice, the companies that built it could be sued for malpractice.

Committee Chairman Richard Blumenthal (D–Conn.) expressed concern about industry concentration, fearing that just a few big incumbent companies would end up developing and controlling A.I. technologies. Altman confirmed that very few companies have the resources to develop and train generative A.I. models like OpenAI's GPT-4 and its successors. He actually suggested that this could be a regulatory advantage, since the new agency would need to focus its attention on only a handful of companies. Marcus, on the other hand, noted the danger of regulatory capture: the few big companies that could afford to comply with the thickets of new regulations would be shielded from competition from smaller startups.

A new A.I. agency modeled on the NRC, the FDA, and their records of overregulation would likely deny us access to the substantial benefits of the technology while providing precious little extra safety.