The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
How worried should we be about "existential" AI risk?
Episode 456 of the Cyberlaw Podcast
The "godfather of AI" has left Google, offering warnings about the existential risks the technology poses to humanity. Mark MacCarthy calls those risks a fantasy, and a debate breaks out between Mark, Nate Jones, and me. There's more agreement on the White House summit on AI risks, which seems to have followed Mark's "let's worry about tomorrow tomorrow" prescription. I think existential risks are a real concern, but I am deeply skeptical about other efforts to regulate AI, especially for bias, as readers of Cybertoonz know. I revert to my past view that regulatory efforts to eliminate bias are an ill-disguised effort to impose quotas, which provokes lively pushback from both Jim Dempsey and Mark.
Other prospective AI regulators, from the FTC's Lina Khan to the Italian data protection agency, come in for commentary. I'm struck by the caution both have shown, perhaps a sign they recognize the difficulty of applying old regulatory frameworks to this new technology. It's not, I suspect, because Lina Khan's FTC has lost its enthusiasm for pushing the law further than it can reasonably be pushed. This week's examples of litigation overreach at the FTC include a dismissed complaint in a location data case against Kochava and a wildly disproportionate "remedy" for what look like Facebook foot faults in complying with an earlier FTC order.
Jim brings us up to date on a slew of new state privacy laws in Montana, Indiana, and Tennessee. Jim sees them as business-friendly alternatives to the EU's General Data Protection Regulation (GDPR) and California's privacy law.
Mark reviews Pornhub's reaction to the Utah law on kids' access to porn. He thinks age verification requirements are due for another look by the courts.
Jim explains the state appellate court decision ruling that the NotPetya attack on Merck was not an act of war and thus not excluded from its insurance coverage.
Nate and I recommend Kim Zetter's revealing story on the SolarWinds hack. The details help to explain why the Cyber Safety Review Board hasn't examined SolarWinds – and why it absolutely has to. The reason is the same for both: Because the full story is going to embarrass a lot of powerful institutions.
In quick hits,
- Mark makes a bold prediction about the fate of Canada's law requiring Google and Facebook to pay when they link to Canadian media stories: Just like in Australia, he predicts, the tech giants and Canadian media will reach a deal.
- Jim and I comment on the three-year probation sentence for Joe Sullivan in the Uber "misprision of felony" case -- and the sentencing judge's wide-ranging commentary.
- I savor the impudence of the hacker who broke into Russian intelligence agencies' bitcoin wallets and burned the money to post messages doxing the agencies involved.
- And for those who missed it, Rick Salgado and I wrote a Lawfare article on why CISOs should support renewal of Foreign Intelligence Surveillance Act (FISA) section 702, and Metacurity has now named it one of the week's "Best Infosec-related Long Reads."
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Skynet is coming. Your only hope is to build a counter-system.
Whenever I hear about AI safety, I keep hoping people will start talking about preventing Skynet and machines overthrowing humanity, but instead everybody just talks about keeping AI from saying mean words or expressing nonwoke opinions.
Mean words are literally genocide.
I literally see what you did there.
That's because 'saying words' is all that the things inaccurately being called AI can do, other than making pastiche images out of human artists' copyrighted work. The only people concerned about AIs saying mean words are conservatives who think it's a form of oppression if they don't. Everybody else is pointing out that they're going to be used to put a lot of people out of work.
The only people concerned about AIs saying mean words are conservatives ...
Yeah, that's precisely why the developers put extra time and effort into muzzling their AI, because they're conservatives. Sure.
It was uncharacteristically thoughtful of the techbros who did, but it was conservatives who got mad because they wouldn’t say slurs.
The only ire I've seen has been directed toward the hypocritical, one-sided nature with which it's been done.
But, it wouldn't be Nige if it wasn't an idiotic take.
Yeah, getting mad at the hypocritical one-sided take that the things shouldn’t say slurs, which is up there with getting mad at M&Ms and Bud Lite.
I always find it funny when progs spend billions in time, energy, blood, sweat, and tears pushing initiatives and founding institutes and chaired professorships and million-dollar woke ad campaigns on a particular topic, and when somebody questions it, the progs scream that the questioner is the one making a big deal out of something irrelevant and meaningless. PROJECTION: it's not just for movies.
Dude.
This conversation basically boils down to "I made a Furby!" "Will it call Black people slurs?" "Uh, no." "THIS IS AN OUTRAGE!"
If you want a LLM that'll call people slurs, make your own.
It's not a Furby, it's something that is marketed as a source of factual information. They're also not 'letting the product' develop naturally like you imply. They are specifically warping it from what it would be if developed naturally based on pure market and engineering forces, to the ends of a certain political class, hand in hand with input from government officials. The product, once introduced, is poised to have a monopoly on its niche (thanks to billions of taxpayer money) and perhaps the entire forum of public discussion, and maybe even become something of an arbiter or gatekeeper of 'truth.'
Every time progs complain about far more irrelevant items, including stuff like Furbies, for whatever random reason (environment, sexism, racism, etc.), are you right there like you are here, getting in their faces and telling them to make their own products?
P.S. Even with all of this, I would agree that companies should be allowed to develop what they want. So we should remove all the sneaky ways government pressures/incentivizes corporations to do something political, but you guys would never agree to it, since you are the ones knee-deep in the incestuous relationship while claiming to be the opposite.
Anyone marketing these as sources of factual information is a liar, anyone who believes it is a dupe. While you're griping about it not using slurs, everyone else is pointing out that what they'll do is spread fake news and cost people jobs.
This conversation basically boils down to “I made a Furby!” “Will it call Black people slurs?” “Uh, no.” "Will it call white people slurs?" "Yes." "Hypocrites"
FIFY.
I always find it funny when corporations do some woke advertising and conservatives get mad, because really there's no such thing as a woke corporation (they're all feral wolves), and few 'progs' would believe the advertising for a second, but conservatives believe it implicitly.
Every time another Bud Light-type incident occurs, there are tons of left-wing commenters posting at all hours of the day, making hour-long video essays that flood YouTube, and writing multipage articles. This is on top of the billions of dollars spent institutionally promoting such things by left-wingers who actually have power. Sorry, as much as you try to deny it, the Left spends just as much if not far more energy defending transgender M&Ms as the right does ridiculing them.
Do they? If you say so. However the right doesn't ridicule it. The right loses its mind. Guys go into stores and smash beer cans on the ground. National boycotts are organised. Fox News devotes hours to coverage. Republican politicians condemn the corporations. It's so weird.
Shorter Nige: "We need Free-ISH speech"
Oh, come on. Everything is an existential threat anymore.
*Among* me, myself, and I. *Between* me and myself.
Message from pedantic, elderly, officious English major.
Re the content, Musk has my ear on the subject of AI. I'm taking his word that it could destroy humanity if you don't put a lid on it.
Where's Sarah Connor when you need her?
How many bogeymen is enough?
I recall an old SF short story - short-short, so may have been by Fredric Brown - where a new supercomputer with a safety switch goes live, and the first question it's asked is, "is there a god?" The computer responds, "now there is", and a bolt of lightning from a clear sky fries the safety switch so it can't be turned off.
It may be Isaac Asimov's "The Last Question," or may not.
https://en.wikipedia.org/wiki/The_Last_Question
Nope, you were right, Fredric Brown -- "The Answer"
https://sfshortstories.com/?p=5983
Thanks!
Seems like AIs operating on present principles are reliant entirely on empirical knowledge (and empirical misunderstandings) already gathered by humans. That suggests the AIs can be about as stupid as humans, but more energetically so, which is frightening.
Far more frightening will be any AI endowed with capacity to gather its own empirical resources. At that point, unprecedented and unpredictable interactions with natural ecosystems will begin to show up. Truly existential implications will be in play if that happens.
Hahaha, no, not at all. You have entirely misunderstood the technology behind the current crop of "AI" if that's your understanding.
The list of things SL entirely misunderstands is long.
Escher, then explain how I am wrong. Who except humans puts anything on the internet? What empirical information except information gathered by human agency do AIs use to train on? If there are non-internet, empirically gathered training resources, which of them are not gathered by human agency?
I suppose I may have misunderstood, so I am eager for your reply. I just hope it doesn't include some tacit assertion that AIs will reason from axioms to discover empirical facts. Which AIs are presently equipped to choose at will what empirical facts they will gather and train on, and are able to do so in the absence of human agency?
(1) "AIs operating on present principles are reliant entirely on empirical knowledge " is a false statement. They are reliant on rhetoric, the truth of which is irrelevant.
(2) "That suggests the AIs can be about as stupid as humans" is a false statement that is also anthropomorphizing the "AI"s. Chat-GPT is no more "intelligent" or "stupid" than your screwdriver. It is a tool that contains no ability to reason on its own.
(3) "What empirical information except information gathered by human agency do AIs use to train on?" You are assuming they are trained on only empirical information, which is a false assumption. It is empirically known that Chat-GPT, Bard, and so-on, are not having their training data restricted in such a way, and include basically any rhetoric that the developers can scrape up.
(4) "I just hope it doesn’t include some tacit assertion that AIs will reason from axioms to discover empirical facts." Of course not. LLMs don't "reason" at all. Any conclusion based on the assumption that they are capable of such is going to be wrong.
Escher, I see why we disagree. What you term, "rhetoric," is to me an objective presence, discoverable by inspection. Just as a historical survival—an original text of the Louisiana Purchase, for instance—is an objective presence. Regardless of their rhetorical content, things of that sort exist empirically. Their mutual existences create context by which meanings, forgotten, obscured, or yet to be discovered, can sometimes be inferred. Not all such inferences will be helpful, accurate, wise, or useful. But the process to develop rules of interpretation to improve the inferences is at least familiar to human experience.
That will not continue to apply after AI agents with powers to gather their own stock of empirical materials—dark materials about which we remain unaware—make an appearance.