How To Restrain the A.I. Regulators
A more flexible model of oversight avoids hyper-cautious top-down regulation and enables swifter access to the substantial benefits of safe A.I.

While some A.I. alarmists argue that further development of generative artificial intelligence like OpenAI's GPT-4 large language model should be "paused," licensing proposals from boosters like OpenAI CEO Sam Altman and Microsoft President Brad Smith (whose company has invested $10 billion in OpenAI) may inadvertently accomplish much the same goal.
Altman, in his prepared testimony before a Senate hearing on A.I. two weeks ago, suggested "the U.S. government should consider a combination of licensing or registration requirements for development and release of AI models above a crucial threshold of capabilities, alongside incentives for full compliance with these requirements."
While visiting lawmakers last week in Washington, D.C., Smith concurred with the idea of government A.I. licensing. "We will support government efforts to ensure the effective enforcement of a licensing regime for highly capable AI models by also imposing licensing requirements on the operators of AI datacenters that are used for the testing or deployment of these models," states his company's recent report Governing AI: A Blueprint for the Future.
So what kind of licensing regime do Altman and Smith have in mind? At the Senate hearing, Altman said that the "NRC is a great analogy" for the type of A.I. regulation he favors, referring to the Nuclear Regulatory Commission. Others at the hearing suggested the way the Food and Drug Administration licenses new drugs might be used to approve the premarket release of new A.I. services. The way the NRC licenses nuclear power plants may be an apt comparison, given that Smith wants the federal government to license gigantic datacenters like the one Microsoft built in Iowa to support the training of OpenAI's generative A.I. models.
What Altman, Smith, and other A.I. licensing proponents fail to recognize is that both the NRC and FDA have evolved into highly precautionary bureaucracies. Consequently, they employ procedures that greatly increase costs and slow consumer and business access to the benefits of the technologies they oversee. A new federal Artificial Intelligence Regulatory Agency would do the same to A.I.
Why highly precautionary? Consider the incentive structure faced by FDA bureaucrats: If they approve a drug that later ends up harming people, they get condemned by the press, activists, and Congress, and may even be fired. On the other hand, if they delay a drug that would have cured patients had it been approved sooner, no one blames them for the unknown lives lost.
Similarly, if an accident occurs at a nuclear power plant authorized by NRC bureaucrats, they are denounced. However, power plants that never get approved can never cause accidents for which bureaucrats could be rebuked. The regulators' credo is "better safe than sorry," ignoring that he who hesitates is often lost. The consequences of such overcautious regulation are technological stagnation, worse health, and less prosperity.
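The incentive asymmetry described above can be sketched as a toy expected-payoff model. The payoff numbers below are illustrative assumptions, not empirical estimates; the point is only that visible harm is blamed while invisible delay is not, so even a mostly beneficial drug looks like a bad bet to the regulator personally.

```python
# Toy model of a regulator's asymmetric career incentives.
# All payoff values and the harm probability are assumptions chosen
# purely for illustration -- they track blame, not social welfare.

def regulator_payoff(approve: bool, drug_is_harmful: bool) -> int:
    """Personal payoff to the bureaucrat (not the payoff to patients)."""
    if approve and drug_is_harmful:
        return -100  # public blame, hearings, possible firing
    if approve and not drug_is_harmful:
        return 1     # mild credit; approvals are expected, not celebrated
    # Delaying is blame-free either way: lives lost to a good drug
    # that sat in review never make headlines.
    return 0

P_HARMFUL = 0.1  # even at only a 10% chance the drug is harmful...

expected_if_approve = (P_HARMFUL * regulator_payoff(True, True)
                       + (1 - P_HARMFUL) * regulator_payoff(True, False))
expected_if_delay = 0.0

print(expected_if_approve)  # -9.1: approving is personally costly
print(expected_if_delay)    # 0.0: "better safe than sorry" dominates
```

Under these assumed numbers, delay dominates approval for the bureaucrat even though 90 percent of approvals would have helped patients, which is the mechanism the paragraph above describes.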
Like nearly all technologies, A.I. is a dual-use technology, offering tremendous benefits when properly applied and posing substantial dangers when misused. Doubtless, generative A.I. such as ChatGPT and GPT-4 has the potential to cause harm. Fraudsters could use it to generate more persuasive phishing emails, to troll individuals and companies at scale, and to churn out fake news. In addition, bad actors using generative A.I. could mass-produce mis-, dis-, and mal-information campaigns. And of course, governments must be prohibited from using A.I. to implement pervasive real-time surveillance or to deploy oppressive social-scoring control schemes.
On the other hand, the upsides of generative A.I. are vast. The technology is set to revolutionize education, medical care, pharmaceuticals, music, genetics, material science, art, entertainment, dating, coding, translation, farming, retailing, fashion, and cybersecurity. Applied intelligence will enhance any productive and creative activity.
But let's assume federal regulation of new generative artificial intelligence tools like GPT-4 is unfortunately inevitable. What sort of regulatory scheme would be more likely to minimize delays in the further development and deployment of beneficial A.I. technologies?
R Street Institute senior fellow Adam Thierer, in his new report, recommends a "soft law" approach to overseeing A.I. developments instead of imposing a one-size-fits-all, top-down regulatory scheme modeled on the NRC and FDA. Soft law governance embraces a continuum of mechanisms, including multi-stakeholder conclaves where governance guidelines can be hammered out, government agency guidance documents, voluntary codes of professional conduct, insurance markets, and third-party accreditation and standards-setting bodies.
Both Microsoft and Thierer point to the National Institute of Standards and Technology's (NIST) recently released Artificial Intelligence Risk Management Framework as an example of how voluntary good A.I. governance can be developed. In fact, Microsoft's new A.I. Blueprint report acknowledges that NIST's "new AI Risk Management Framework provides a strong foundation that companies and governments alike can immediately put into action to ensure the safer use of artificial intelligence."
In addition, the Department of Commerce's National Telecommunications and Information Administration (NTIA) issued in April a formal request for comments from the public on artificial intelligence system accountability measures and policies. "This request focuses on self-regulatory, regulatory, and other measures and policies that are designed to provide reliable evidence to external stakeholders—that is, to provide assurance—that AI systems are legal, effective, ethical, safe, and otherwise trustworthy," notes the agency. The NTIA plans to issue a report on A.I. accountability policy based on the comments it receives.
"Instead of trying to create an expensive and cumbersome new regulatory bureaucracy for AI, the easier approach is to have the NTIA and NIST form a standing committee that brings parties together as needed," argues Thierer. "These efforts will be informed by the extensive work already done by professional associations, academics, activists and other stakeholders."
A model for such a standing committee to guide and oversee the flexible implementation of safe A.I. would be the National Science Advisory Board for Biosecurity (NSABB). The NSABB is a federal advisory committee composed of 25 voting subject-matter experts drawn from a wide variety of fields related to the biosciences. The NSABB provides advice, guidance, and recommendations regarding biosecurity oversight of dual-use biological research. A National Science Advisory Board for A.I. Security could similarly consist of a commission of experts drawn from relevant computer science and cybersecurity fields to analyze, offer guidance, and make recommendations with respect to enhancing A.I. safety and trustworthiness. This more flexible model of oversight avoids the pitfalls of top-down hypercautious regulation while enabling swifter access to the substantial benefits of safe A.I.
Barbed wire?
Skynet has it handled.
There have been many examples in history of technology that should have been regulated at the start, but after that it is very hard to undo the damage. This list includes toxic chemicals dumped into the environment, carbon emissions, risky and predatory financial products, tax shelters, and patent medicines.
We, as a society, know how dangerous AI can be; we have seen (so far) mild examples of this (bias in hiring, chatbots becoming very hateful). Experts are telling us that this is a technology that needs to be regulated. We need to regulate AI correctly now, while we have the chance.
Tax shelters were regulated from the start, by definition.
No income was taxed when the country started, and not for 124 years after that. If the income taxers left a bit of your own money to you, it's not a "loop-hole," it's a "remaining freedom."
Where would we be without the guidance and wisdom of the likes of Joe Biden and Donald Trump?
*cue harp music and watery screen-wipe to a future with a grey-haired Nick Gillespie sitting in his 100 sq ft upzoned retirement sleep pod*
The Artificial Intelligence Decency Act is the first amendment of AI research!
*Nurse knocks on the door*
Nurse: Mr Gillespie, it’s time for your 12pm COVID booster!
Gillespie: Ain’t freedom awesome? *submits article to ChatGPT 12.5 for final approval*
Huh, upon further reflection, my comment doesn't capture it quite enough.
Editors, replace:
Nurse: Mr Gillespie, it’s time for your 12pm COVID booster!
With:
Robot Nurse: Patient N-G-48786, it's time for your 12pm COVID booster, you have 30 seconds to comply! *Second electronic voice, different from first follows*
"Terms of service clearly state that this booster is not mandatory, but if you refuse, you will be ejected from the premises. Remember, just because a choice is difficult doesn't make it any less a choice. Other disclaimers may apply and are subject to change at any time, even retroactively."
Also, the totes optional booster will be rectally administered. And the robot nurse will be very colorful.
How about we gather all the Marxists in Silicon Valley and kill them? That would put a big dent in the evil coming from AI.
Journalism really is a cesspool.
AI should be heavily regulated, but I can think of some good purposes for it, as long as the proper guard rails are set up. For example: AI could help fortify our elections to make sure the person who’s supposed to win really won.
Yep, I've seen this push in the MSM lately. Figured Big Guv and Big Tech wants to monopolize the Power and Money to be had.
Patently ridiculous. Regulating what kinds of software can be developed is ridiculous. It makes no sense and has no effect. You may as well regulate the wind and the tides. The software will be developed. If it is run from within the regulated territory, its capabilities will simply not be publicly discussed. If it is run from outside the regulated territory, then its effects will be as potent here as if it were run locally, save for a few ms of latency, thanks to the magic of the internet. This is going to end exactly the same way the encryption wars of the 90s did. The ONLY thing to fear about AI regulation is that the 4th amendment will be entirely shredded in service of making sure that people aren't hiding an unregulated AI somewhere.
Regulation of AI won't matter:
https://www.scientificamerican.com/article/heres-why-ai-may-be-extremely-dangerous-whether-its-conscious-or-not/
Here’s another way of looking at it: a superintelligent AI will be able to do in about one second what it would take a team of 100 human software engineers a year or more to complete. Or pick any task, like designing a new advanced airplane or weapon system, and superintelligent AI could do this in about a second. Once AI systems are built into robots, they will be able to act in the real world, rather than only the virtual (electronic) world, with the same degree of superintelligence, and will of course be able to replicate and improve themselves at a superhuman pace. Any defenses or protections we attempt to build into these AI “gods,” on their way toward godhood, will be anticipated and neutralized with ease by the AI once it reaches superintelligence status. This is what it means to be superintelligent.
Scientific American has "Scien[ce]" in the name of the magazine. I can't think of a more mic-drop-ey concept.
All fine to talk about, but how about one simple rule:
Every computer must be built with a simple power switch (as opposed to a relay, or worse, a shutdown routine). At least if one of these things does get out of hand, we need to be able to shut it down, rather than expecting it to be willing to commit suicide just because we ask it to.
When an industry supports regulating itself, most of the time it is to stifle competition.
When people talk of AI and regulating it, it makes me think of Neuromancer by William Gibson. In there, you have the Turing Police investigating out of control AIs. There is a big difference between something that seems intelligent (ChatGPT) and something that is (Wintermute). When I can't break ChatGPT in one minute then it will be time to worry.
A well-informed commentary on the "state of the debate", but lacking in insight or application of libertarian principles.
1. "Better safe than sorry" is a pedantic version of the "Precautionary Principle" used by every regulatory bureaucracy: Every potential risk must preclude the private (though not governmental) use of anything. If there's *any* hazard, the bureaucrat is desperate to ban *everyone* from using the product or service, no matter how beneficial it might be.
2. No informed person has ever stipulated what hazards there might be to using a "ChatBot" that don't already exist in normal methods of communicating facts and fallacies. For decades, hackers have been buying vast bundles of IP addresses to distribute hype and lies to internet resources that consider each IP address a separate user or independent publisher.
3. There is no "artificial intelligence" with independent existence or free will. Everything a "ChatBot" does or says has been programmed by people or crafted into existing sources (like Wikipedia) over the course of decades. There is nothing novel in "Artificial Stupidity", practiced with impunity by every politician since King David.
4. The claims of "social disasters" from AI (or AS) are ridiculous. The sole purpose of demanding government intervention is to eliminate competition. The novelty in the latest demands is that they are aimed at preventing anyone but the existing market leaders (in this case, Microsoft) from being successful.