We Absolutely Do Not Need an FDA for AI
If our best and brightest technologists and theorists are struggling to see the way forward for AI, what makes anyone think politicians are going to get there first?
I don't know whether artificial intelligence (AI) will give us a 4-hour workweek, write all of our code and emails, and drive our cars—or whether it will destroy our economy and our grasp on reality, fire our nukes, and then turn us all into gray goo. Possibly all of the above. But I'm supremely confident about one thing: No one else knows either.
November saw the public airing of some very dirty laundry at OpenAI, the artificial intelligence research organization that brought us ChatGPT, when the board abruptly announced the dismissal of CEO Sam Altman. What followed was a nerd game of thrones (assuming robots are nerdier than dragons, a debatable proposition) that consisted of a quick parade of three CEOs and ended with Altman back in charge. The shenanigans highlighted the many axes on which even the best-informed, most plugged-in AI experts disagree. Is AI a big deal, or the biggest deal? Do we owe it to future generations to pump the brakes or to smash the accelerator? Can the general public be trusted with this tech? And—the question that seems to have powered more of the recent upheaval than anything else—who the hell is in charge here?
OpenAI had a somewhat novel corporate structure, in which a nonprofit board tasked with keeping the best interests of humanity in mind sat on top of a for-profit entity with Microsoft as a significant investor. This is what happens when effective altruism and ESG do shrooms together while rolling around in a few billion dollars.
After the events of November, this particular setup doesn't seem to have been the right approach. Altman and his new board say they're working on the next iteration of governance alongside the next iteration of their AI chatbot. Meanwhile, OpenAI has numerous competitors—including Google's Bard, Meta's Llama, Anthropic's Claude, and something Elon Musk built in his basement called Grok—several of which differentiate themselves by emphasizing different combinations of safety, profitability, and speed.
Labels for the factions proliferate. The e/acc crowd wants to "build the machine god." Techno-optimist Marc Andreessen declared in a manifesto that "we believe intelligence is in an upward spiral—first, as more smart people around the world are recruited into the techno-capital machine; second, as people form symbiotic relationships with machines into new cybernetic systems such as companies and networks; third, as Artificial Intelligence ramps up the capabilities of our machines and ourselves." Meanwhile, Snoop Dogg channeled AI pioneer-turned-doomer Geoffrey Hinton on a recent podcast: "Then I heard the old dude that created AI saying, 'This is not safe 'cause the AIs got their own mind and these motherfuckers gonna start doing their own shit.' And I'm like, 'Is we in a fucking movie right now or what?'" (Hinton told Wired, "Snoop gets it.") And the safetyists just keep shouting the word guardrails. (Emmett Shear, who was briefly tapped for the OpenAI CEO spot, helpfully tweeted this faction compass for the uninitiated.)
wake up babe, AI faction compass just became more relevant pic.twitter.com/MwYOLedYxV
— Emmett Shear (@eshear) November 18, 2023
If even our best and brightest technologists and theorists are struggling to see the way forward for AI, what makes anyone think that the power elite in Washington, D.C., and state capitals are going to get there first?
When the release of ChatGPT (built on GPT-3.5) about a year ago triggered an arms race, politicians and regulators collectively swiveled their heads toward AI like a pack of prairie dogs.
State legislators introduced 191 AI-related bills this year, according to a September report from the software industry group BSA. That's a 440 percent increase from the number of AI-related bills introduced in 2022.
In a May hearing of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, at which Altman testified, senators and witnesses cited the Food and Drug Administration and the Nuclear Regulatory Commission as models for a new AI agency, with Altman declaring the latter "a great analogy" for what is needed.
Sens. Richard Blumenthal (D–Conn.) and Josh Hawley (R–Mo.) released a regulatory framework that includes a new AI regulatory agency, licensing requirements, increased liability for developers, and many more mandates. A bill from Sens. John Thune (R–S.D.) and Amy Klobuchar (D–Minn.) is softer and more bipartisan, but would still represent a huge new regulatory effort. And President Joe Biden announced a sweeping executive order on AI in October.
But "America did not have a Federal Internet Agency or National Software Bureau for the digital revolution," as Adam Thierer has written for the R Street Institute, "and it does not need a Department of AI now."
Aside from the usual risk of throttling innovation, there is the concern about regulatory capture. The industry has a handful of major players with billions invested and a huge head start, who would benefit from regulations written with their input. Though he has rightly voiced worries about "what happens to countries that try to overregulate tech," Altman has also called concerns about regulatory capture a "transparently, intellectually dishonest response." More importantly, he has said: "No one person should be trusted here….If this really works, it's quite a powerful technology, and you should not trust one company and certainly not one person." Nor should we trust our politicians.
One silver lining: While legislators try to figure out their priorities on AI, other tech regulation has fallen by the wayside. Regulations on privacy, self-driving cars, and social media have been buried by the wave of new bills and interest in the sexy new tech menace.
One thing is clear: We are not in a Jurassic Park situation. If anything, we are experiencing the opposite of Jeff Goldblum's famous line about scientists who "were so preoccupied with whether or not they could, they didn't stop to think if they should." The most prominent people in AI seem to spend most of their time asking if they should. It's a good question. There's just no reason to think politicians or bureaucrats will do a good job answering it.
Musk’s house has no basement.
Welcome to the new year, same as the old year.
Israel is using AI to murder civilians on a sickening scale. Talk about that, shanda.
Misek doesn’t understand how AI works right now.
Has the death count at the hands of IDF reached the death count at the hands of Hamas on October 7th yet? If not, please let me know when you think it has. Then we can talk.
a.heroic.asshole has no brain.
He’s a.heroic.dose.of.shit.
What the AI does is what the AI does, Israel has no part in it.
Which factions wanted to regulate and severely limit the printing press?
Marxists
I have trouble recognizing rhetorical questions.
Catholics.
I think that AI development is a natural process, and will have both good and bad impacts on humans. The most important thing is to recognize and limit the bad impacts before AI is applied to everyday life.
There’s no danger to specialised AI. It’s not like anyone’s developing AGI. All AI tools do is generate text, images or sound when prompted. There’s no way that could harm humanity.
Any more than the WaPo and Hollywood?
The appearance of AI being intelligent when it's not will lead to many dumb and ignorant people believing its narratives as truth instead of the truth. Just like people treat Google's first few links as being the truth.
No it shouldn’t be regulated.
The appearance of AI being intelligent when it's not will lead to many dumb and ignorant people believing its narratives as truth instead of the truth.
So what’s your excuse then?
You tell us, Jeffy. What’s your excuse, since you did ask first?
All fails when you realize there’s no dividing line between AI and non-AI. Automation’s been using heuristics for a long time, and none of it needs to be labeled “AI”.
A marxist AI with a dash of liberteen sprinkled on top could replace some of the editors here. Would it be that challenging for the AI to search Twitter et al. social media sites for “the news?”
ChatGPT, use libertarian thought to defend my leftist bias.
Must be a small language model emulating a tiny mind.
“I’m supremely confident about one thing: No one else knows either.”
This reminds me of the annual celebrity astrologer predictions for the year. Or perhaps the ads for financial advisers who made their clients and themselves rich a few years ago. The old adage that hindsight is always 20/20 may not be true, but it illustrates the principle that SOMEONE (we don’t know WHO yet) will turn out to have been right all along. It is probably true that almost every technological and social improvement in world and human history has been a mixed blessing at the time. I am supremely confident that AI (whatever it is) will also turn out to be a mixed blessing.
The scary thing about large language models is that they only think they are hallucinating.
A “pause” in AI, or regulation of AI, even if it were a good idea, wouldn’t actually work. It would just drive AI research and development to places where there weren’t such pauses or regulations.
But *IF* such a "pause" or regulation were a good idea, one outcome of such a scheme ought to be a reformation of the educational curriculum to take into account not just AI, but computers and the Internet in general, quite frankly. The educational model in this country is over 60 years old and needs to be overhauled. I think we all are aware of "kids these days" who can't do basic tasks like balancing a checkbook, or simple mental arithmetic (like counting change), or other simple tasks that would have been taken for granted 20 years ago. Why is that? It's easy to blame the teachers, but really IMO the problem is deeper than that. I think it is because computation and the Internet have made time-saving tools ubiquitous, so that students have not had to reinforce and deepen their knowledge of these basic subjects. Why learn how to do mental math when you always have a smartphone in your pocket that will do the math for you? And because students don't have those skills reinforced, because the computer always does the math for them, they don't learn those skills as they should.
Here is a case in point: In general, students start learning their multiplication tables in second grade. But even if students earn perfect marks on their multiplication exams in second grade, they don't *really* develop their multiplication skills deeply and concretely until they are asked to apply those skills in all of their subsequent math classes. And if the Internet and smartphones permit students to take "shortcuts" in doing their math homework in later grades, then they never develop their multiplication skills as deeply as they should. So even if multiplication is taught the exact same way in second grade, it won't help if students don't use and deepen those skills later on.
And now apply this same analysis to all fields, not just ones that rely on mathematical computation. That is what AI promises. So students may learn the grammatical rules of how to construct a sentence in an early grade, but then in later grades when asked to apply those rules in, say, writing a term paper, if they can get ChatGPT to do it for them, they don’t deepen their knowledge of the English grammar rules that they learned earlier.
So we have to have an educational system that takes this reality into consideration. These time-saving tools will exist, but students who are still new learners shouldn't be using them as crutches while they are still developing their skills. This is a difficult problem that goes beyond just "what to do about AI" and requires a radical rethinking of how education is conducted in this country. Frankly, it would be better IMO if there were less reliance on technology from the learner's perspective.
Nobody knows how to properly use a buggy whip these days.
They sent people to the moon with slide rules.
Myth
During the Apollo missions, an on-board computer and large computers on Earth performed the critical guidance and navigation calculations necessary for a successful journey. In addition, crews carried a slide rule for more routine calculations.
https://airandspace.si.edu/collection-objects/slide-rule-5-inch-pickett-n600-es-apollo-13/nasm_A19840160000
Fuck buggy whips, when are scented candles going to finally get obsoleted?
So we have to have an educational system that takes this reality into consideration.
Or we could just ban, expel and fail students who use those tools.
Or we could just ban, expel and fail students who use those tools.
The future will look back on those tools like we look back on calculators and personal computers.
edit works!
Good luck trying to enforce that.
It’s pretty easy, but would require that we give up on universal education.
What is the point of education if not to teach people to use the tools at their disposal?
You can’t program a scientific calculator to plot a curve if you don’t understand the math behind it. Are they cheating?
People talking about AI giving us 4 hour work weeks are the same useless pajama-class retards who thought everyone could just work from home during the pandemic.
Whole meals in pill form!
But on a serious note, I can see a SHIT ton of people in the Journolist class who could be rolled back to a zero hour work week with the proper application of AI.
The sorts of people who do jobs so useless that they can be completely automated by a text generator will be out of work for sure. That includes most journalists. The only job for humans in the newsroom of the future will be fact-checking, assuming that’s deemed important enough for a particular publication to do. Doubtless, many publications will just spew whatever the fuck out there regardless of its truth and move on to the next story much the same as they do today.
Speaking of education.
What would you call it if elected officials supervising a state’s education system worked directly with an instructional media company to promote and encourage the use of that company’s products in the classroom, products that just so happened to produce an ideological message that comported with the ideology of the elected officials’ party? Would you call it “indoctrination”?
https://www.nbcnews.com/news/education/prageru-conservative-videos-classrooms-republican-officials-help-rcna131613
all public education is indoctrination
Public education.
This goes away once taxes for education dies.
Yes. Even if I agree with some of the conclusions that those PragerU clips/infographics reach.
What would you call it if elected officials supervising a state’s education system worked directly with an instructional media company to promote and encourage the use of that company’s products in the classroom,
Umm, I’d call it modern education. How do you think the whole Climate Change/LGBTQI2MAP+ culture war shit is taught in schools? Or are you pretending to just discover something awful now that it runs a counter narrative?
At one time, KM-W believed we absolutely do not need an FDA for food or drugs.
and she was right
Asking the approval of the FDA for products and services should be voluntary.
How will we “know what’s in them” though?
I find it difficult to believe ANY intelligent person would hold up the NRC, let alone the FDA as positive examples of Government oversight and regulation – I suspect the author misconstrued a tweet.
Moving on from the abject failures of both the NRC and FDA, I think I’ve got a bead on what is important near-term – vetting.
Who the bad-words is vetting the AIs? The near-term danger we face is flawed AI being used by unsophisticated users to make life-altering decisions. Just imagine the power of low-level government employees, backed by the immense power of the State, to choose who gets audited, who gets a loan, who gets a break on insurance, who is allowed to travel, who gets paroled, who is found innocent, etc.
The media, as usual, has got it wrong. The first problem we face with AI isn't SkyNet (Terminator), the first problem is vetting the AI applications before they are in the hands of unsophisticated users.
We pay attention to Dr. Jordan Peterson because of his education, his teaching, his publishing, his clinical work and his public speaking. About 35 years of “work experience” we can reference and review. Peterson, like him or not, is vetted.
Finally, who is vetting the AI?
Who is vetting the AI? The users.
It would help if I could detect if you’re being humorous.
IF
We keep Government away, AI can evolve humanity.
ELSE
Government regulation will stifle innovation (think NRC) and guarantee that the most powerful technology in human existence will be controlled by other countries and our own, not-to-be-trusted Government military.
Our history is rife with examples of our Government “keeping us safe” by fucking over our civil rights so that some corporations can make a buck.
We pay attention to Dr. Jordan Peterson
We do?
Substitute your favorite – I care not. Hopefully you are capable of understanding “the point”. Are you?
Or are you doing drive-bys to engage in a little verbal jujitsu?
You must be new here. Jeffy is a brainless sea lion.
There should also be another headline without the last two words in this one.
Regulations won’t stop me from publishing skynet if I can ever get this fking project finished.
It’s just more “regulatory capture” of the press. Does anyone really think government is interested in AI service in any other business that may or may not prove to be effective and will be easily found out as it’s tried??? Oh no; the government’s only interest is in AI prank calling, scamming and dumping unprecedented amounts of Marxist indoctrination onto the general public at an alarming rate for their [Na]tional So[zi]alist Empire take-over of the USA.
Exactly how they have done it to the mindless drones of mass media puppets.
As long as you can pull the plug I don’t understand what the big baby deal is. AI goes nuts, pull the plug. No power no problem.
Here is the wisest thing I have heard about AI so far, which ties in with what you just said here:
Steven Pinker Demolishes AI Apocalypse Myths #shorts #AI https://youtube.com/shorts/0Ky8MN_O0co?si=9lmcdyKB2Yv9zRqV
Hurray! The “Edit” button works! Someone at the Reason tech support is the brains behind the operation!
🙂
😉
I don’t find what Steven Pinker said at all insightful. He’s dueling with a straw man.
Perhaps not an FDA, but some modest export restrictions? Against, say, a violent ethnostate that has been conducting illegal surveillance of illegally occupied people and is currently using AI to more efficiently murder civilians?
“Zyklon will help you get your Zzzzs!”
Fuck Off, Nazi!
This is dumb. And it’s a misuse of the term “AI.”
What we’re calling AI right now is little more than a super fancy encyclopedia. Instead of going to a shelf and finding the big book with an I on it to look up Iceland so you can read about it and write a paper on the subject, you just ask a computer to tell you about Iceland and write the paper for you. Only difference between the two is that YOU don’t actually learn anything about Iceland, but you do get a paper written on the subject.
We see the same phenomenon with calculators. Every time I see someone whip out their cell phone to calculate the tip for their meal, it’s the same thing. You’re outsourcing your ability to know something to a machine which can instantly report it TO you.
This is all just stupid fearmongering for the purpose of asserting control over a growing technology. We’re not talking about computers that have independent thought and self-reasoning and preferences. What you should be worried about is the increased reliance on machines diminishing the human knowledge/capability base.
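For the curious, here is a minimal sketch of what that "super fancy encyclopedia" usage looks like in code. It assumes the OpenAI Python SDK and an OPENAI_API_KEY set in the environment; the model name is only a placeholder, and the details will differ by provider.

```python
# Hypothetical example: ask the machine about Iceland and let it write the paper.
# Assumes `pip install openai` and an OPENAI_API_KEY in the environment;
# the model name is a placeholder, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": "Write a 500-word school report about Iceland."}
    ],
)

print(response.choices[0].message.content)  # the "paper," with nothing learned by the writer
```

The point stands either way: the shelf of encyclopedias is replaced by one function call, and the person asking learns nothing about Iceland in the process.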
What we’re calling AI right now is little more than a super fancy encyclopedia.
The ones we talk to, yes.
What makes AI different from regular programming is that regular programming requires programmers to program every possible scenario. AI learns. Do this and you crash. Do that and you don’t. Try lots of this and that.
AI learns.
Kind of.
It doesn’t learn like humans learn.
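A minimal sketch of the "try lots of this and that" kind of learning being argued about above, as opposed to hand-coding the rule. Everything in it is a toy: the two actions and their crash probabilities are made up purely for illustration.

```python
import random

# Toy trial-and-error learner: nothing tells it which action is better;
# it estimates that from repeated attempts (crash vs. no crash).
CRASH_PROBABILITY = {"swerve": 0.8, "brake": 0.2}  # hypothetical environment

def attempt(action):
    """One try: 1.0 if we don't crash, 0.0 if we do."""
    return 0.0 if random.random() < CRASH_PROBABILITY[action] else 1.0

value = {a: 0.0 for a in CRASH_PROBABILITY}  # learned estimate per action
count = {a: 0 for a in CRASH_PROBABILITY}

for trial in range(1000):
    if random.random() < 0.1:                        # explore: try something at random
        action = random.choice(list(CRASH_PROBABILITY))
    else:                                            # exploit: use what looks best so far
        action = max(value, key=value.get)
    reward = attempt(action)
    count[action] += 1
    value[action] += (reward - value[action]) / count[action]  # running average

print(value)  # "brake" ends up rated higher, without anyone hard-coding that rule
```

Whether that counts as "learning" in the human sense is exactly the dispute here; mechanically, it is statistics over many tries rather than programmed rules.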
Yet modern Libertarianism Plus tells me we need an FDA for legalized drugs because *checks historical notes* that’s how “we’ll know what’s in them” once they’re all legalized.
Forget the ‘for AI’ part. We Absolutely Do Not Need an FDA, period.