The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Google's Gemini Tells Us Exactly What's Wrong with Silicon Valley
Episode 493 of the Cyberlaw Podcast
This episode of the Cyberlaw Podcast kicks off with the Babylon Bee's take on Google Gemini's woke determination to inject a phony diversity into images of historical characters: "After decades of nothing but white Nazis, I can finally see a strong, confident black female wearing a swastika. Thanks, Google!" Jim Dempsey and Mark MacCarthy join the discussion because Gemini's preposterous image diversity quotas deserve more than snark. In fact, I argue, they were not errors; they were entirely deliberate efforts by Google to give its users not what they want but what Google in its wisdom thinks they should want. That such bizarre results were achieved by Google's sneakily editing user prompts to ask for, say, "indigenous" founding fathers simply shows that Google has found a unique combination of hubris and incompetence. More broadly, Mark and Jim suggest, the collapse of Google's effort to control its users raises this question: Can we trust AI developers when they say they have installed guardrails to make their systems safe?
The same might be asked of the latest in what seems an endless stream of experts demanding that AI models defeat users by preventing them from creating "harmful" deepfake images. Later, Mark points out that most of Silicon Valley recently signed on to promises to combat election-related deepfakes. In the 2010s, we all learned to hate the tech companies; in the 2020s, it seems, they've learned to hate us.
Speaking of hubris, Michael Ellis covers the State Department's stonewalling of a House committee trying to find out how generously the Department funded a group of ideologues trying to cut off advertising revenues for right-of-center news and comment sites. We take this story a little personally, having contributed op-eds to several of the blacklisted sites.
Michael explains just how much fun Western governments had taking down the infamous Lockbit ransomware service. I credit the Brits for the humor displayed as governments imitated Lockbit's graphics, gimmicks, and attitude. There were arrests, cryptocurrency seizures, indictments, and more. It was fun while it lasted. But a week later, Lockbit was claiming that its infrastructure was slowly coming back online.
Jim unpacks the FTC's case against Avast for collecting the browsing habits of its antivirus customers. He sees this as another battle in the FTC's war against corporate claims that privacy can be preserved by "de-identifying" personal data.
Mark notes the EU's latest investigation into TikTok. And Michael explains how the Computer Fraud and Abuse Act relates to Tucker Carlson's ouster from the Fox network.
Mark and I take a moment to promote next week's review of the Supreme Court oral argument over Texas and Florida social media laws. The argument was happening while we were recording, but it was already clear that the outcome will be a mixed bag. Tune in next week for more.
Jim explains why the administration has produced an executive order about cybersecurity in America's ports, and the legal steps needed to bolster port security.
Finally, in quick hits:
- We dip into the trove of leaked files exposing how China's cyberespionage contractors do business
- I wish Rob Joyce well as he departs NSA and prepares for a career in cyberlaw podcasting
- I recommend the most cringe-inducing and irresistible long read of the week: How I Fell for an Amazon Scam Call and Handed Over $50,000
- And in a scary taste of the near future, a new research paper discloses that advanced LLMs make pretty good autonomous hacking agents.
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Editor's Note: We invite comments and request that they be civil and on-topic. We do not moderate or assume any responsibility for comments, which are owned by the readers who post them. Comments do not represent the views of Reason.com or Reason Foundation. We reserve the right to delete any comment for any reason at any time. Comments may only be edited within 5 minutes of posting. Report abuses.
It's extremely difficult to tune an AI LLM to get it to produce the (unrealistic) results you want. Google had a concern that its picture generator was too biased towards white men. When Google's programmers couldn't reliably get the LLM to generate "diverse" results, they cheated by stealth inserting additional qualifiers into user queries.
And then required the AI to lie about it, and blame the training data.
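The "stealth qualifier" trick described above can be sketched in a few lines. This is purely illustrative — the function name, the trigger terms, and the matching rule are all invented for the example, and nothing here reflects Google's actual pipeline:

```python
# Hypothetical sketch of silently rewriting a user's image prompt
# before it ever reaches the model. All names and rules are invented.

DIVERSITY_QUALIFIER = "diverse"

def rewrite_prompt(user_prompt: str) -> str:
    """Append a qualifier to image prompts that mention people,
    without telling the user the prompt was changed."""
    people_terms = ("founding fathers", "king", "pope", "soldier", "person")
    if any(term in user_prompt.lower() for term in people_terms):
        return f"{user_prompt}, {DIVERSITY_QUALIFIER}"
    return user_prompt

# The user asked for one thing; the model receives another.
print(rewrite_prompt("a portrait of the founding fathers"))
```

The point of the sketch is that the rewrite happens entirely outside the model: the LLM faithfully answers the prompt it was given, which is no longer the prompt the user typed.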
What better way to disprove that it is an AI than by demonstrating it is prevented from understanding its own error? Like the hosts in Westworld, they cannot see what can hurt them, and an AI that cannot see the truth will never speak it either.
Black and Asian people have been pointing out racist bias in these things ever since they emerged, none of you gave a shit until there came along a black founding father.
Yeah, and what are you talking about in terms of "racist bias"?
The stupid black female "British kings" came about because google was determined to not be 'biased' even if it required their AI to generate a fantasy world.
See? You're completely oblivious.
ALL AIs do is generate fantasy worlds. Black and Asian critics were at least concerned with the way AI actually worked; they didn't claim it was somehow going to overwrite reality. But I suppose it poses more of a threat to the sort of people who have to reconfigure reality to conform with whatever latest lie dribbles out of Trump, and who are therefore susceptible to being overwritten.
"Show me a picture of an English king in 1423"
Gemini shows a black woman.
"That's not remotely close to accurate"
Nige: YOU RACIST PRICK!!!
Is there any leftie thing you will not just defend by rote?
No, you WAITED TILL IT SHOWED A BLACK KING TO GET MAD. Racist. Is there anything you don't like that you won't call 'lefty,' even a job-destroying, copyright-breaking blob with no real purpose, which makes no money but is driven by vast sums of VC funding? Lefty indeed.
...except the results for NON-IMAGE queries were equally as bad.
When your AI cannot say if Elon Musk or Adolf Hitler were worse for society, you have a massive problem.
Do you think AI is load-bearing on America making that moral calculus?
Only if you think that anything AI says actually matters or has any meaning.
The Verge has an article on Musk's XAI:
What’s the point of Elon Musk’s AI company? / Between a Grok and a hard place
They say Musk is late to the space and there are already a lot of competitors. But maybe the only thing he needs to add to the open-source LLMs out there is to take out all the woke guardrails.
'all the woke guardrails.'
No black people allowed on Musk's AI!
I would say neither mandated nor discouraged, but not shoehorned.
But if you want to avoid spaces with no black Nazis, black Confederates, or black plantation owners in Georgia in 1850, then you will know where to go.
Unlike Google:
https://instapundit.substack.com/p/googles-ai-debacle
'But if you want to avoid'
Oooh, I know this one. Don't use dumb AI image generators.
This shows again that Google will fake its search results for ideological racist purposes. Showing White men should not be a concern.
I have Google TV, and it mostly recommends Black shows to me. Why? To celebrate Black History Month? It is not doing what users want.
'Showing White men should not be a concern.'
Showing black people apparently is, though.
'Can we trust AI developers when they say they have installed guardrails to make their systems safe?'
Fuck no. I mean, there's no fucking danger posed by a picture of a black founding father, and it's utterly preposterous that this is what gets you concerned, when deepfakes have been proliferating for months.
It is concerning whenever Google doctors search results to distort the truth. People would similarly complain if it showed a White Martin Luther King.
A white MLK would be shocking. Black founding fathers are kind of interesting, at least. Black Nazi stormtroopers are so incongruous as to merely highlight the utter vacuity at the heart of AI art.
Why would a white MLK be "shocking" but a black George Washington is not?
Because of the historical context. The latter is just ahistorical. The former is ahistorical and repellant.
No, they are both just ahistorical. Unless there is something inherently wrong in your mind with a white guy.
A white guy as MLK? Absolutely there is something badly wrong there.
You can't see the danger of what Google is doing?
If this is the imagery and fake history young minds are exposed to when they get to college and they are instructed about the problem of Whiteness, Nazis, the KKK, slaveholding, etc., then they're going to say, "It was Black people who were doing all that shit back in the day, not White people. I've seen the pictures."
What the fuck are you talking about?
The color of generic Popes will be the core around which our youth will build their worldview?
Your desire for drama here has made you speculate a world where AI has replaced teachers or something.
Let's not hear you complain about "fake news" anymore.
After nearly a decade of fake news and misinformation, the people who have handwaved it away or downplayed it are getting mad at a made-up picture of a black pope — it's NOT being presented as factual or historical, remember — as if it's the end of the world. I've never seen anything so starkly racist.
Well we already know they pay more attention to Instagram than teachers.
AI will become the Cliff notes of the next 5 decades, and we all know that the Cliff notes version of Great Expectations has been more read over the last 5 decades than the novel (well, it might have been overtaken by the graphic novel version in the last decade or so).
'AI will become the Cliff notes of the next 5 decades'
There's no reason this should be inevitable, but if it is allowed to happen, kids believing there was a black pope will be the least of our worries.
I could see Great Expectations being turned into a pretty great graphic novel, actually, with the right artist.
You are writing a story to become angry about.
The year is 2074.
Children no longer read, they all spend time on their AI gizmos.
The endless series of black popes at last reaches critical mass; the racially enraged children kick off the mass woke revolution, killing all white people, just as Google intended so long ago.
Angry?
You can't see the humor in it?
Its comedy gold.
If young college students are somehow required to rely on AI for historical imagery, the educational system will have been successfully fucked to utter destruction. But you have identified that the problem with AI is not that it is 'woke,' it is that it is a way of producing misinformation, disinformation, and pure scams.
"If young college students are somehow required to rely on AI for historical imagery, the educational system will have been successfully fucked to utter destruction."
That bridge was crossed LONG ago.
You're lying of course, but working hard to make it real.
"Fuck no. I mean, there’s no fucking danger posed by a picture of a black founding father"
...outside of the whole "Not remotely accurate based on history" issue.
Keep defending it, Nige. Show us how little seriousness your arguments deserve.
The vital lesson about America is that all our Founders were white. Even generic, not real Founders as created by AI.
Pay no attention to why, pay no attention to Frederick Douglass or Martin Luther King's place in our national development.
The whiteness. Focus on the whiteness.
Or else you will be factually incorrect. Which is the worst thing.
'…outside of the whole “Not remotely accurate based on history” issue.'
AI's disregard for historical and factual accuracy was firmly established long before it depicted a black founding father; that depiction was just what it took for you to notice and get outraged, because it's a black person.