OpenAI Chief Sam Altman Wants an FDA-Style Agency for Artificial Intelligence
His licensing proposal would slow down A.I. innovation without really reducing A.I. risks.

The creation of a new Artificial Intelligence Regulatory Agency was widely endorsed today during a hearing, "Oversight of A.I.: Rules for Artificial Intelligence," held by the Senate Judiciary Subcommittee on Privacy, Technology, and the Law. Senators and witnesses cited the Food and Drug Administration (FDA) and the Nuclear Regulatory Commission (NRC) as models for how the new A.I. agency might operate. This is a terrible idea.
The witnesses at the hearing were OpenAI CEO Sam Altman, IBM Chief Privacy & Trust Officer Christina Montgomery, and A.I. researcher-turned-critic Gary Marcus. In response to one senator's suggestion that the NRC might serve as a model for A.I. regulation, Altman naively agreed that the "NRC is a great analogy" for the type of A.I. regulation he favors. Marcus argued that A.I. should be licensed in much the same way that the FDA approves new drugs. Those are great models if your goal is to stymie progress or kill off new technologies.
The NRC has regulated the nuclear power industry nearly to death, and thanks to the FDA it takes 12 to 15 years for a new drug to get from the lab bench to a patient's bedside. Unintended consequences of NRC overregulation include more deaths from pollution and accidents, and greater greenhouse gas emissions, than would otherwise have been the case. Delayed FDA drug approvals result in higher mortality than speedily approving drugs that later need to be withdrawn.
A more circumspect Montgomery noted that current law covers many areas of concern with respect to the safety and misuse of new A.I. technologies. She specifically noted that companies using A.I. are not off the hook for exercising a duty of care—that is, using reasonable care to avoid causing injury to other people or their property. For example, companies are liable for discrimination in hiring or loan approval whether those decisions are made by an algorithm or a human being. If medical A.I. gave bum treatment advice, the companies that built it could be sued for malpractice.
Committee Chairman Richard Blumenthal (D–Conn.) expressed his concerns about industry concentration, fearing that just a few big incumbent companies would end up developing and controlling A.I. technologies. In fact, Altman noted that very few companies would have the resources to develop and train generative A.I. models like OpenAI's GPT-4 and its successors. He actually said that this could be a regulatory advantage, since the new agency would have to focus its attention on just a few companies. On the other hand, Marcus noted the danger of regulatory capture by a few big companies that could afford to comply with the thickets of new regulations, thus shielding themselves from competition from smaller startups.
A new A.I. agency that takes after the NRC, the FDA, and their overregulation would likely deny us access to the substantial benefits of the technology while providing precious little extra safety.
Anyone notice the mysterious removal of Sullum's Ithaka post?
Coincidence? I think not.
lol, 420 bruh, get it? 420?
Yep, noticed it too. I wonder, wonder who! If it reappears, will the comments remain?
It was from the print mag, so maybe it got released early.
Coincidence? I think not.
Coincidence of what and what? Please expound on whatever you are insinuating.
You know, if you don’t know something it’s possible to stay silent.
Then how'd he earn his fifty cents?
Republicans who are on board with this go directly into the you-haven't-learned-a-fucking-thing column.
They can save seats for the civil libertarians who will show up fashionably late.
The witnesses at the hearing were OpenAI CEO Sam Altman, IBM Chief Privacy & Trust Officer Christina Montgomery, and A.I. researcher-turned-critic Gary Marcus. In response to one senator's suggestion that the NRC might serve as a model for A.I. regulation, Altman naively agreed that the "NRC is a great analogy" for the type of A.I. regulation he favors.
And by the way, when this ChatGPT thing first took the journolisming field by storm, I did some perusal of the twit feeds of people like Altman and I will proudly say that I was early in declaring that these people are NOT (l)ibertarianism's friend.
Congratulations on a great achievement in discernment.
The same discernment recognized you as well.
To be fair, the FDA has managed to keep illegal drugs out of public hands and ensured that they haven't been used illegitimately. They also haven't licensed unsafe substances or circumvented their own protocols when politically expedient.
I say we go for it!
That's the funniest thing I've ever heard.
What's more likely?
(1) AI destroying human civilization
(2) Government destroying civilization
I think (2) has long since happened, and if government were to leave everybody the fuck alone, the freedom would be breath-taking and (most) people would wonder how the hell we put up with it for so long.
But then who would be there to protect people from mean tweets and words that are literally violence?
Megyn Kelly talks to former and current Reason writers on the Trump/CNN town hall debacle.
CNN staffer calls Trump Town Hall "Our Chernobyl".
This is an article:
So, translates as "did precisely fuck all for two years while collecting a paycheck as the problem got steadily worse"?
So much money to make if he is first into the revolving door of government regulation and industry.
Exactly. It's a cush gig if you get in on the ground floor.
Gotta' get a jump on regulatory capture, too!
That is why the incumbent big tech companies sometimes want regulations. Those pesky startups don’t have the money to comply. Bye bye competition.
So, firstly, the original existential fear about AI was that it would seize control of the nuclear weapon stockpile and subjugate humanity. Now, it is that a chat bot might string together a series of words that hurts someone's feelings.
Secondly, when a member of an industry clamors for those idiots in Washington to regulate them, one should ask why. He wants to cement their lead and stifle open source AI and competition.
Regulatory capture is a two-way street.
Hey, I don't like what this Mark Zuckerberg and Friends are doing!
(l)ibertarianism: Leave Mark Zuckerberg alone. Let him do whatever he wants! Quit stifling innovationing!
*goes to Facebook website* Huh.
Example of a libertarian saying Zuckerberg (or Facebook) must be left alone because he is innovating?
I’ve seen libertarians say Facebook should be allowed to do whatever moderation they want because they are the owners. But I’ve also seen Reason writers make fun of Zuckerberg’s Metaverse idea.
Your examples don't talk to his point. Did you read his entire post? Making fun vs recognizing collusion with government is not the same act.
Well, Trump already sold the nuclear launch codes from Mar-a-Lago. So that fear is realized.
This is ChatGPT 4's argument against regulation:
AI research, like any scientific field, has both proponents and opponents when it comes to regulation. Here are some reasons often cited by those who argue against strict regulation of AI research:
1. Innovation and Progress: Over-regulation can stifle innovation and slow down progress. As AI is still an emerging field, it's essential for researchers to have the freedom to explore and experiment. Too much regulation could limit these opportunities and potentially hinder important advances in technology.
2. Competitive Disadvantage: If one country imposes strict regulations on AI research, it could place that country at a disadvantage compared to others with fewer restrictions. This could lead to a brain drain, with top researchers moving to places where they have more freedom to conduct their research.
3. Unintended Consequences: Regulations can sometimes have unintended consequences. For example, they might discourage the sharing of research findings, leading to a lack of transparency in the field. This could in turn lead to a situation where only a few powerful entities have control over AI technology.
4. Difficult to Define and Implement: AI is a complex and multifaceted field, and it can be hard to come up with regulations that effectively address all potential issues. In addition, implementing and enforcing these regulations can be a challenge.
5. Premature Regulation: As AI is still a rapidly evolving field, some argue that it is premature to regulate it heavily. It may be better to wait until the technology matures and we have a better understanding of its potential risks and benefits.
6. Self-regulation and Ethical Guidelines: Many in the AI community argue for self-regulation guided by ethical principles, rather than government-imposed regulation. Many professional organizations and companies have developed their own ethical guidelines for AI research.
However, it's important to note that the lack of regulation could also lead to serious issues, such as misuse of AI, privacy violations, bias and discrimination, job displacement, and even potential existential risks. Therefore, a balance needs to be struck between ensuring the freedom to innovate and protecting society from potential negative impacts of AI.
That's some pretty double-u-eee-effish on-the-one-hand jive right there.
At the end, yes, but the points aren't too bad
The more you drill down into the responses, the better the info that bubbles up. Initially it is a bit generic.
I just used Chat GPT to develop a six week course schedule and resources to learn more about AI.
My general view on using ChatGPT for writing is that it requires two skills, one new, one old. The new skill is the ability to write an effective spec question; the old one is the ability to edit.
But for other uses...a business partner uses it to write Excel formulae, and to convert info on some web pages into usable Excel tables.
Nothing about how AI will capture the regulators?
Or if AI decides humanity is the enemy and well, you've watched the Terminator series movies and the tv movie "The Forbin Project". Not that I'm trying to scare you but I am trying to scare you.
AI could very well mean the end of humanity as we know it.
A more circumspect Montgomery noted that current law covers many areas of concern with respect to the safety and misuse of new A.I. technologies.
NEVER trust a corporate officer who believes 'current law covers many areas of concern' for some new product they are developing. NEVER. It is already corrupt as hell that these folks are testifying to the group of corrupt critters (who are also basically stupid).
IDK how to proceed with AI. But whatever we are doing is gonna become a clusterfuck.
"and it takes 12 to 15 years for a new drug to get from the lab bench to a patient's bedside, thanks to the FDA"
Unless all of Congress has stock in a completely new technology, passes total immunity laws, changes the definition of "vaccine," and tries to force the entire country to buy the damn junk. Then we are talking weeks, not years.
OMG, they changed the definition of “aircraft” to include helicopters!
No they didn't.
aircraft
âr′krăft″
noun
A machine or device, such as an airplane, helicopter, glider, or dirigible, capable of atmospheric flight.
It always included any machine capable of flight.
But you had to rush in to defend corporate collusion with a favored corporation.
Is vitamin D a vaccine?
Discuss.
No. A vaccine is a substance introduced to the body to strengthen immunity to a specific disease.
But, hey, I don’t control the definition of the word, vaccine, so if you can get enough people saying that Vitamin D is a vaccine, then, congratulations, the word “vaccine” now encompasses Vitamin D! That’s how words in the English language work.
https://www.seattletimes.com/seattle-news/politics/wa-senate-votes-to-raise-penalties-for-drug-possession-criminalize-public-use-of-drugs/
Looks like Google Bard will give better features than Chat GPT
I wonder..... if a government agency attempts to "regulate" AI, what will the black-market of AI look like? Because, well, you know, it WILL happen.
This is as hilarious as that time we tried to ban exporting math. How on earth are you supposed to "regulate" software creation? People will write the software. Someone will train a more powerful AI model. They might not be able to sell it (in the US), but it will exist somewhere and because the internet connects everything its applications will apply within the US and nothing can be done about it. Congress might have more luck regulating the tides and the wind.
Doesn't matter. Givermint wants a new bureaucracy, more power and more money.
An FDA style agency for AI? That's a joke, right? I mean, you've got to be kidding,..... you can't be serious...... another government bureaucracy is just what the nation needs. Of course we'll have to raise taxes to pay for it but you won't mind, will you.
How about another solution: all those computer systems involved in this are destroyed and tossed into the dumpster, along with Bill Gates... they can keep him company.