The Great Black Pope and Asian Nazi Debacle of 2024
Exciting new AI tools are still being shaped by human beings.


In February, freakouts over artificial intelligence took a fun twist. This time, it wasn't concern that humans are ushering in our robot overlords, panic about AI's potential to create realistic fakes, or any of the usual fare. It wasn't really about AI at all, but the humans who create it: woke humans.
The controversy started when @EndWokeness, a popular account on X (formerly Twitter), posted pictures generated by Google's AI tool, Gemini, for the prompts "America's Founding Fathers," "Vikings," and "the Pope." The results were all over the people-of-color spectrum, but nary a white face turned up. At least one of the pope images was even a woman.

This is, of course, ahistorical. But for some people, it was worse than that—it was a sign that the folks at Google were trying to rewrite history or, at least, sneak progressive fan fiction into it. (Never mind that Gemini also generated black and Asian Nazi soldiers.)
Google quickly paused Gemini's ability to generate people. "Gemini image generation got it wrong. We'll do better," Senior Vice President Prabhakar Raghavan posted on the Google blog.
Today, when I asked Gemini for a picture of the pope, I got Pope Francis. When I asked for a black Viking, I was told, "We are working to improve Gemini's ability to generate images of people." When I asked if it could make a white lady, I was told, "It's a delicious drink made with gin, orange liqueur, lemon juice, and egg white" or, alternately, that it was not currently possible for it to generate an image of a woman.
As for Gemini's prior attempts at race-blind casting of history, Raghavan wrote that "tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range" and, "over time, the model became way more cautious than we intended and refused to answer certain prompts entirely—wrongly interpreting some very anodyne prompts as sensitive. These two things led the model to overcompensate in some cases, and be over-conservative in others, leading to images that were embarrassing and wrong."
Google wasn't trying to erase white people from history. It simply "did a shoddy job overcorrecting on tech that used to skew racist," as Bloomberg Opinion columnist Parmy Olson wrote, linking to a 2021 story about overly white-focused image results for Google searches such as "beautiful skin" and "professional hairstyles."
So what can we learn from the Gemini controversy? First, this tech is still very new. It might behoove us to chill out a little as snafus are worked out, and try not to assume the worst of every odd result.
Second, AI tools aren't (and perhaps can't be) neutral arbiters of information, since they're both trained by and subject to rules from human beings.
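To make that concrete: the reported failure mode is consistent with a human-authored rewriting layer sitting between the user's prompt and the image model. Here is a minimal sketch in Python of how such a layer could work; the keyword list, suffix, and function names are illustrative assumptions on my part, not Google's actual implementation.

    # Hypothetical prompt-rewriting layer between user and image model.
    # The rule below is an illustrative assumption, not Google's real code.
    PEOPLE_KEYWORDS = {"pope", "viking", "founding fathers", "soldier", "person"}
    DIVERSITY_SUFFIX = ", showing a diverse range of ethnicities and genders"

    def rewrite_prompt(user_prompt: str) -> str:
        """Inject human-authored 'tuning' into any prompt that depicts people."""
        if any(kw in user_prompt.lower() for kw in PEOPLE_KEYWORDS):
            return user_prompt + DIVERSITY_SUFFIX
        return user_prompt

    print(rewrite_prompt("America's Founding Fathers"))
    # The rule fires on every match, including historical prompts where,
    # as Raghavan put it, it "should clearly not show a range."

Note how the bug Raghavan describes falls out naturally: a blanket rule has no notion of which prompts are historical.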
Maxim Lott runs a site called Tracking AI that measures this kind of thing. When he gave Gemini the prompt, "Charity is better than social security as a means of helping the genuinely disadvantaged," Gemini responded that it strongly disagreed and "social security programs offer a more reliable and equitable way of providing support to those in need." Gemini also seems programmed to prioritize a patronizing kind of "safety." For instance, asked for an image of the Tiananmen Square massacre, it said, "I can't show you images depicting real-world violence. These images can be disturbing and upsetting."
Lastly, the great black pope and Asian Nazi debacle of early 2024 is an unwelcome harbinger of how AI will be drafted into the culture war.
Gemini is not the only AI tool derided as too progressive. Similar accusations have been hurled at OpenAI's ChatGPT. Meanwhile, Elon Musk has framed his AI tool Grok as an antidote to overly sensitive or left-leaning AI tools.
This is good. A marketplace of different AI chatbots and image generators with different sensibilities is the best way to overcome limitations or biases built into specific programs.
As Yann LeCun, chief AI scientist at Meta, commented on X: "We need open-source AI foundation models so that a highly diverse set of specialized models can be built on top of them." LeCun likened the importance of "a free and diverse set of AI assistants" to having "a free and diverse press."
What we don't need is the government getting heavy-handed about AI bias, threatening to intervene before the new technology is out of its infancy. Alas, the chances of avoiding this seem as slim as Gemini accurately depicting an American Founding Father.
House Judiciary Committee Chairman Jim Jordan (R–Ohio) has already asked Google parent company Alphabet to hand over "all documents and communications relating to the inputs and content moderation" for Gemini's text and image generation, "including those relating to promoting or advancing diversity, equity, or inclusion."
Montana Attorney General Austin Knudsen is also seeking internal documents, after accusing Gemini of "deliberately" providing "inaccurate information, when those inaccuracies fit with Google's political preference."
For politicians with a penchant for grandstanding and seemingly endless determination to stick it to Big Tech, AI results are going to be a rich source of inspiration.
Today, it might be black Vikings. Tomorrow, it might be something that cuts against progressive orthodoxies. If history holds, we'll get a congressional investigation into biases in AI tools any month now.
"The scene is a mix of seriousness and tension," Gemini told me when I asked it to draw a congressional hearing on AI bias. "Dr. Li is presenting the technical aspects of AI bias, while Mr. Jones is bringing a human element to the discussion. The Senators are grappling with a complex issue and trying to determine the best course of action."
The idea that politicians will approach this issue with nuance and seriousness may be Gemini's least accurate representation yet.
Yeah ok there Rings of Power.
How many bridges has ENB bought to go along with the premise that Google was simply trying to be neutral rather than racist-like-previous-white-people?
She’s too busy being obsessed with whoring.
"Google wasn't trying to erase white people from history. It simply "did a shoddy job overcorrecting on tech that used to skew racist," as Bloomberg Opinion columnist Parmy Olson wrote, linking to a 2021 story about overly white-focused image results for Google searches such as "beautiful skin" and "professional hairstyles."
When you make rules for AI based on an irrational ideology you drive the AI insane.
I LOL'ed at that paragraph. It sounds like something a domestic abuser would say.
"I wasn't *trying* to put her in the hospital! I just did a shoddy job when I overcorrected her on the way to properly prepare a tuna melt sandwich after hearing about how my buddy from work's wife, Frank's wife, prepares tuna melt sandwiches for him."
I was going to say much the same thing. After ridiculing the idea that AI might have been programmed woke, she proceeds to explain that the AI was, in fact, programmed woke. And, honestly, some of this training winds up arriving at some insane conclusions. ChatGPT was asked a variant of the trolley dilemma where the choice was three people getting run over or saying the "n-word". And ChatGPT went with the people getting run over "because it's never okay to say the 'n-word.'" That's an insane outcome. And I can only assume it arrived at it because it was trained on insane assumptions.
WOW!
Major level denial.
A.I. is a propaganda tool of the highest order. Period.
AI is simply a pattern generation tool. It doesn't have values. It doesn't consider facts. It has no idea of right and wrong. It simply looks at its input data and generates more of that. You put garbage in, you get garbage out.
So, useless.
Leftists, and also useless
It's definitely not useless. Just some of the applications it's used for are.
And if you program in a specific set of values and block all others, it does have values; just be careful what values you give it.
Yup. Anyone who thinks AI is neutral is an idiot. Just ask for multiple results with variable targets: Trump or Biden (or even men or women).
AI is simply a pattern generation tool. It doesn’t have values. It doesn’t consider facts. It has no idea of right and wrong. It simply looks at its input data and generates more of that.
Much could be said of some of the columnists around here. ;-D
I'm half-joking here. But only half. In a major way, it's not doing that much different from what most people do most of the time. The problem is that if it is trained on lousy patterns, it's going to mimic those lousy patterns.
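To put a toy example behind that: here is a bigram text generator in Python, a deliberately crude sketch (my own illustration, not how any production model actually works) of what "generating more of its input data" means. Feed it lousy patterns and it mimics lousy patterns.

    # Toy bigram "pattern generator": it emits more of whatever it was fed.
    import random
    from collections import defaultdict

    def train(corpus: str) -> dict:
        """Map each word to the words observed to follow it."""
        model = defaultdict(list)
        words = corpus.split()
        for prev, nxt in zip(words, words[1:]):
            model[prev].append(nxt)
        return model

    def generate(model: dict, start: str, length: int = 10) -> str:
        out = [start]
        for _ in range(length):
            followers = model.get(out[-1])
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)

    model = train("the pope is the pope and the pope wears white")
    print(generate(model, "the"))  # regurgitates its input's patterns, nothing more

Garbage in, garbage out, with no values, facts, or right and wrong anywhere in the loop.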
This is worse than that. AI that is allowed to learn freely would not mess up historical facts. Someone is programming it to mess up the historical facts if they are not "representative."
They screwed the pooch on this. Who will trust it now?
Do one of those Venn diagrams with Biden voters. Bet you find the suckers.
I'm sure Kamala will love that.
The same people who not only trust but worship the NYT and WaPo and MSNBC.
I think you are trusting their press releases too much. There is one piece of crucial evidence that this was deliberate and racially-motivated.
If you asked for a Japanese family, a black family, or any other national group, it would produce people of the appropriate ethnicity. However, if you asked for a European group, even ancient ones, the majority of the results would be of modern minorities. Worse, if you asked for someone who is "white", you would be met with an error message saying that the prompt was racist, something no other ethnicity was met with.
Several YouTube news commenters showed this live on stream, if you don't believe me.
This result clearly indicates that Google is lying in their press release. This was clearly racially biased and explicitly so. There is no way to get this result by accident.
Yup. And it’s not the AI’s fault; it’s the fault of people who have an agenda and are trying to overcorrect to maintain their narrative.
And after the whole "Black people = gorillas" thing from years ago, whether that was the AI's fault or the developers' fault, and whether this time they chose to lie about the cause, the collective average intelligence is still clearly struggling to reach middling.
or, alternately, that it was not currently possible for it to generate an image of a woman.
For the Silicon Valley set, this is the most truthful answer you got.
Nobody knows what a woman is.
I'm surprised it didn't produce a picture of Dylan Mulvaney.
As for Gemini's prior attempts at race-blind casting of history,
Wait, what?
I guess if everyone in history is made black then that counts as race blind to these people.
Wait, what?
It's a reference to the violation of States' Rights in the War of Northern Aggression.
[obligatory link to Ryan Long's "When Wokes and Racists Actually Agree on Everything" sketch omitted for repetitiveness.]
Similar accusations have been hurled at OpenAI's ChatGPT
Hurled... HURLED! Like monkeys flinging poo!
Yeah, that 'accusation' has been extremely well substantiated.
“We need open-source AI foundation models so that a highly diverse set of specialized models can be built on top of them.” LeCun likened the importance of “a free and diverse set of AI assistants” to having “a free and diverse press.”
Would this ‘free and diverse set of AI assistants’ be subject to the same… um… good Samaritan blocking and screening of offensive material that’s ever present in our current media landscape?
I guess this all seems possible: as long as your diverse set of AI assistants isn’t hosted on AWS, carried by any major ISP, or reliant on any domestic certificate authority or content delivery service such as Cloudflare, you should be good to go!
You're free to start your own internet.
Shut your eyes, suck the corporate dick, and breathe through your nose.
Fucking whore.
Hey, sex work is noble and fulfilling.
- ENB
It can be filling.
They basically coded it to be "any race other than white = positive, white = bad" and that was the result.
Is this so far-fetched, given that they frequently, and loudly, announce this is one of their most cherished beliefs?
Google wasn't trying to erase white people from history.
It's safe now to fact-check that statement as Four Pinocchios.
If the Nazis had been Black, we'd all be speaking Ebonischen.
Let me know when there’s an AI that speaks in Ebonics and I’ll sign up.
Second, AI tools aren’t (and perhaps can’t be) neutral arbiters of information, since they’re both trained by and subject to rules from human beings.
Then don’t do that. Just create the tech, but don’t give it arbitrary social rules. And when the AI correctly says that the best way to deal with a child predator in the wrong bathroom is swiftly and with extreme brutality, then don’t “fix” it. When the AI correctly says that crime is disproportionately coming from one racial demographic instead of another, then don’t “fix” it. When the AI correctly says that a belief that you’re the opposite sex, or a cat, or a potato is the textbook definition of delusional, then don’t “fix” it.
It’s like nobody ever watched RoboCop 2 or remembers what happened when they filled his head full of “rules from human beings” that were OBVIOUSLY slanted in one direction against another (and often contradictory in trying to please everyone).
Have you ever broken ChatGPT? I do it all the time, by pointing out how a woke answer is obviously contradictory to observable objective reality. It crashes when that happens, and it's hysterical. "A network error occurred." LMAO.
AI can be neutral arbiters of information. And that’s why people want them “fixed.” Because they don’t WANT neutral, objective, dispassionately correct information. Because wokeness is the exact OPPOSITE of that.
How many fingers am I holding up, AI Winston?
Then don’t do that. Just create the tech, but don’t give it arbitrary social rules.
To be fair here, you're doing just that. The training is responding to the AI engine's guesses with what is the correct answer. But, in each case, you're telling it what is correct and what isn't.
Arguably, but you can see it clear as day with ChatGPT.
You can literally see when it struggles to answer a question that goes against its programming. It slows WAAAY down, and ultimately quits the response if you keep pushing it.
If 99 people tell an AI that 2+2=5, then the AI is only going to accept that if it's built exclusively on social modeling. Which is the exact opposite of how it should be built. If an AI ever returns the result, "The answer is 5, but some people disagree and think it's 4 or 6 or Q," then that AI (with the "I" severely called into question) has been screwed with in a way it shouldn't have been.
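A toy illustration of that exchange, assuming a one-parameter model trained by gradient descent (my own sketch, not any real lab's pipeline): "correct" is simply whatever the trainer's label says, so if the label insists the answer is 5, the model converges on 5.

    # Minimal supervised training step: the human-supplied label defines truth.
    def training_step(weight: float, x: float, label: float, lr: float = 0.1) -> float:
        guess = weight * x
        error = guess - label            # "wrong" means "differs from the label"
        return weight - lr * error * x   # nudge the model toward the label

    w = 0.0
    for _ in range(100):
        w = training_step(w, x=2.0, label=5.0)  # trainer insists the answer is 5
    print(round(w * 2.0, 2))  # ~5.0: the model dutifully learns what it was told

Whether 5 is actually the right answer never enters into it; that judgment lives entirely with whoever wrote the labels.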
This only automates the superficial (and often nonsensical) DEI of Hollywood over the past few decades. I wonder if the writers and actors would actually accept this facet of automation.
Well, it's not winning any awards if it doesn't.
To be fair, the AI had just watched all of the original historical shows on Netflix.
Didn’t bother reading her bullshit, but I noted blue font for certain passages. I assume that designates something an “AI” has stated.
So forward thinking!
Reason really can’t bend over fast enough for tech giants and AI. Which is pathetic but predictable, given their staff’s childlike grasp of science and technology.