The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Sen. Schumer Tackles AI Regulation
Episode 464 of the Cyberlaw Podcast
Sen. Schumer (D-NY) has announced an ambitious plan to produce a bipartisan AI regulation program in a matter of months. Jordan Schneider admires the project; I'm more skeptical. The rest of our commentators, Chessie Lockhart and Michael Ellis, also weigh in on AI issues. Chessie lays out the case against panicking over existential AI threats, canvassed this week in the MIT Technology Review. I suggest that anyone complaining that the EU or China is getting ahead of the US in AI regulation (lookin' at you, Sen. Warner!) doesn't quite understand the race we're running. Jordan explains the difficulty the US faces in trying to keep China from surprising us in AI.
Michael catches us up on Canada's ill-advised effort to force Google and Meta to pay Canadian media whenever a user links to a Canadian story. Meta has already said it would rather ban such links. The end result could be that even more Canadian news gets filtered through American media, hardly a popular outcome north of the border.
Speaking of ill-advised regulatory initiatives, Michael and I comment on Australia's threatening Twitter with a fine for allowing too much hate speech on the platform post-Elon.
Chessie gives an overview of the DELETE Act, a relatively modest bipartisan effort to regulate data brokers' control of personal data.
Michael and I talk about the growing tension between EU member states with real national security responsibilities and the Brussels establishment, which has enjoyed a 70-year holiday from national security history and expects the next 70 to be more of the same. The latest conflict is over how much leeway to give member states when they feel the need to plant spyware on journalists' phones. Remarkably, both sides think government should have such leeway; the fight is over how much.
Michael and I are surprised that the BBC feels obliged to ask, "Why is it so rare to hear about Western cyber-attacks?" Because, BBC, the agencies carrying out those attacks are on our side and mostly respect rules we support.
In updates and quick hits:
- I bring listeners up to date on how things turned out for the lawyers who filed a ChatGPT-hallucinated brief in federal court: Not well.
- Chessie flags the creation of a new Justice Department section in the National Security Division: Natsec Cyber.
- Chessie also welcomes the growing recognition, some of it in cold, hard cash, for cybersecurity clinics.
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
LOL, like he has a clue?
I think he does have a clue. It's just another tool to filter and/or censor info.
AI is highly manipulable by the programmer. It's an advanced search engine with the ability to write scripts. A good example of the censorship in the Google search engine is the recent gas stove and asthma study, which is near academic fraud and pure junk science.
Do a Google search for "gas stove asthma junk science". There will be 100+ hits touting the study and only 2-3 hits mentioning the junk science.
TASRA: a Taxonomy and Analysis of Societal-Scale Risks from AI
The elephant in the room here is that AI can be used in war. Some argue that, ideally, mechanical drones could be pitted against one another in casualty-free battles that allow nations to determine who would win a war of lethal force, without having to actually kill any human beings. If taken no further, this would be a major improvement over current warfare practices. However, these capabilities are not technologically far from allowing the mass killing of human beings by weaponized drones. Escalation of such conflicts could lead to unprecedented violence and death, as well as widespread fear and oppression among populations that have been targeted by mass killings.
“War” involves dead people - I think we’ve created the training set for AI to understand that…
I don't see how drones destroying drones achieves the goal of war, which is to overthrow leaders.
If that were true, war would tend to gravitate toward assassinations of enemy leaders. The goal of war is to deplete the enemy's ability to fight, which may not depend on their leadership. Killing the enemy leader might harden the enemy's resolve, and even replace them with a more effective leader. Drones against drones might just be a new form of proxy war.
Senator Schumer tackled _something_?! Why, this is indeed newsworthy.
But the news of the day is that only one living American President is not a descendant of a slave owner. Congratulations to the man not named Barack -- and to all those in California who now realize that reparations are due from those of every color.
Throwing the Bullshit flag on this one, pretty sure Barry Hussein's Mammy's side of the fambily owned some if you go back far enough.
I'll sleep better knowing Chucky Schumer's on the case, maybe he can team up with Carlos Danger.
The Schmuck of Wall St still doing his thing.