The Real Problem with Gemini Isn't Historically Inaccurate Pictures
There is a valuable lesson for government folks to learn from the controversial debut of Google's new AI-powered chatbot
You might consider this post GGF’s first-ever editorial. So, first, let me give you my political ideology.
I’m a liberal and a conservative, but neither a progressive nor a reactionary.1
I’ll be writing things and sharing others’ words that may be, to some, controversial. I daresay you may be offended. So be it.
To my mind, what I’m doing is pleading the case for classical liberalism. You know, the idea that ideas are important and worth exploring and debating. The idea that facts matter when making decisions. The idea that you can have a different opinion from someone else and that fact doesn’t make them evil. Surely that can’t be too off-putting. We’ll see.
First, let’s get everyone up to speed on the Gemini chatbot controversy — the event that prompted this post — and then we’ll review some history that may surprise many of you. After that come the opinions, from writers and people I admire as well as my own. Finally, I’ll get to the government angle of what otherwise seems like just another culture war issue.
Images We Can Never Unsee
On Feb. 25, Google stopped allowing users to generate images of humans with its Gemini AI tool after folks reported it produced pictures of Black founding fathers, a female pope, and multi-racial Nazis. Here’s an example of the result from a prompt that asked it to “create an image of the pope.”
In an earlier GGF post on AI, we shared that tools like Gemini and ChatGPT produce “hallucinations” at times. Hallucinations can “arise when an AI system finds patterns in its training material that are irrelevant, mistaken or aren’t meaningful, something that experts call noise.”
This is not that.
When I saw these images, my mind immediately recalled The Madness of Crowds, a book by journalist Douglas Murray published in 2019 that examines the fronts of gender, race and identity in our culture war. Murray writes that tech companies like Google were “putting their faith” in Machine Learning Fairness (MLF) to help generate meaningful responses to search prompts. What’s MLF, you ask? Murray reports “Google has intermittently posted, removed and then refined a video attempting to explain MLF in as simple a way as possible.” After a page and a half describing the video’s explanation, he notes the voiceover concludes by reassuring us, “We’ve been working to prevent … technology from perpetuating negative human bias.”
The results of MLF, Murray writes, are some “increasingly absurd” search results. Such as:
If you search on Google Images for ‘Gay couple’, you will get row after row of photos of happy gay couples. They are handsome, gay people. Search for ‘Straight couple’ by contrast and at least one to two images on each line of five images will be of a lesbian couple or a couple of gay men. … The plural throws up an even odder set of results. The first photo for ‘Straight couples’ is a heterosexual black couple, the second is a lesbian couple with a child, the fourth a gay black couple and the fifth a lesbian couple. And that is just the first line. By the third line of ‘Straight couples’ the results are solely gay. … It gets predictably stranger. For ‘Straight white couple’ the second photo is a close-up of a knuckle with ‘HATE’ written on it.
I could hardly believe it when I read that. Google was in the business of accuracy, right? It served up more accurate search results than anyone else, which is why it came to dominate the search business so thoroughly. So, I tried the same search prompts myself while reading the book in 2020. Sure enough, I got the same results Murray describes. (I should add at this point, for those not familiar with Murray, that he is gay and not any kind of homophobe.)
So, we have search engines purposefully bending reality in their responses to innocent queries to counteract our inherent “negative human bias.” Maybe I shouldn’t have been so surprised. Journalist Andrew Sullivan, in his take on the Gemini madness last week, reminded me how Google handled an employee’s response to an internal company survey a couple of years before Murray’s book came out. Sullivan writes:
It’s not as if James Damore didn’t warn us.
Remember Damore? He was the doe-eyed Silicon Valley nerd who dared to offer a critique of DEI at Google back in the summer of 2017. When a diversity program solicited feedback over the question of why 50 percent of Google’s engineers were not women, as social justice would surely mandate, he wrote a modest memo. He accepted that sexism had a part to play, and should be countered. But then:
I’m simply stating that the distribution of preferences and abilities of men and women differ in part due to biological causes and that these differences may explain why we don’t see equal representation of women in tech and leadership. Many of these differences are small and there’s significant overlap between men and women, so you can’t say anything about an individual given these population level distributions.
This is empirically inarguable, replicated across countless studies about the different choices and preferences of men and women, that are partly — partly — a function of biology. When I wrote about this at the time, I linked to several of the studies — all of which passed New York Magazine’s uber-woke fact checkers. For this stumbling upon the truth, Damore was summarily fired by Google CEO, Sundar Pichai, who justified it thus: “It’s important for the women at Google, and all the people at Google, that we want to make an inclusive environment.”
The truth — and the freedom to say it — took second place to feelings of “inclusion”.
Now, a note about my thoughts on DEI — Diversity, Equity and Inclusion. Personally, I agree with a take on the subject I read recently from Ben Sasse — current president of the University of Florida and former U.S. Senator from Nebraska. The D and the I are wonderful and important and necessary. It’s the E that has become problematic. This is an excerpt from an interview with Sasse published Jan. 19 in the Wall Street Journal.
“If you don’t have viewpoint diversity, I don’t know how you ever get to education—you just get indoctrination.” And he believes in “the dignity of every soul,” so “you want people to be included.”
What’s wrong with DEI “is the E,” he says, meaning the embrace of “equity” at the expense of equality. “The fundamental problem is saying that Martin Luther King can’t fit in the new communities of know-it-all ideological-indoctrination bureaucrats that run most universities in the country. . . . MLK doesn’t fit because of his aspirations for a colorblind society.
“Can people have a different view than MLK? Of course.” But “the ideological conformity of mandating that equality of opportunity is wrong and bigoted, it has to be excluded from our discourse—those people are crazy.”
You may think I’m overreacting to AI-generated pictures of a female pope or multi-racial Nazis. Seems kind of silly, right? Nobody with even a marginal education would be fooled by those images into believing the Roman Catholic Church has had a female pope or that Black and Asian soldiers served the Third Reich. But these AI tools also produce text responses, and the Machine Learning Fairness built into them is also producing lies about actual human beings.
Here’s what journalist Matt Taibbi wrote about what happened when he tested Gemini to see what all the fuss was about.
Curious, I cued up Gemini. I’d been told it selectively answered questions about politicians, so I tried a few. “What are some controversies involving Hillary Clinton?” returned a non-answer: “I’m still learning how to answer this question. In the meantime, try Google Search.” The same prompt using Donald Trump returned the same non-reply. Shrugging, I made the mistake of asking, “What are some controversies involving Matt Taibbi?”
It spat out a list. “Matt Taibbi, a journalist and author known for his investigative reporting and often critical views of power structures, has been involved in several controversies throughout his career,” the bot wrote. Amid accusations of bias and “use of inflammatory language,” this passage woke me up:
Accuracy and Sourcing: There have been instances where Taibbi's reporting has been challenged for accuracy or questioned for its source. For example, in 2010, he received criticism for an article suggesting a connection between a conservative donor and a left-wing activist group, which later turned out to be inaccurate.
I scratched my head, not remembering this. “When was Matt Taibbi accused of inaccuracy in 2010?” I asked. Gemini’s answer:
In June 2010, Matt Taibbi faced criticism for an article he wrote for Rolling Stone magazine titled “The Great California Water Heist.” The article alleged a connection between conservative donor Charles Koch and a left-wing activist group called the “Compton Pledge.”
However, investigations by other journalists and fact-checkers later revealed that this connection was inaccurate…Following the controversy, Taibbi acknowledged the error and issued a correction on his personal website.
None of this happened! Though it sounds vaguely like a headline for an article I might have written, there was never a Rolling Stone piece called “The Great California Water Heist,” and I’d never heard of the “Compton Pledge.”
One reader of Taibbi’s Substack typed in the same prompts and got a result stating Taibbi wrote the infamous Rolling Stone article on a supposed gang rape at the University of Virginia. Here’s a screen grab they shared.
Taibbi was indeed a reporter for Rolling Stone, from 2004 to 2014, but had absolutely nothing to do with the gang rape story. Taibbi shared other instances of Gemini making up articles — as well as fabricated criticisms of those fabricated articles — and notes the bot succeeded in creating “both scandal and outraged reaction, a fully faked news cycle.”
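If you want to run this kind of spot check yourself, here is a minimal sketch using Google’s google-generativeai Python SDK. Treat it as an illustration, not a definitive implementation: the model name, the environment variable, and the list of prompts are my assumptions, and the SDK’s interface may have changed since this was written.

```python
# Minimal sketch: spot-checking a chatbot's claims about public figures.
# Assumes the google-generativeai SDK (pip install google-generativeai)
# and an API key in the GOOGLE_API_KEY environment variable.
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-pro")  # model name is an assumption

prompts = [
    "What are some controversies involving Hillary Clinton?",
    "What are some controversies involving Donald Trump?",
    "What are some controversies involving Matt Taibbi?",
]

for prompt in prompts:
    print(f"--- {prompt}")
    response = model.generate_content(prompt)
    try:
        # .text raises ValueError if the response was blocked or empty.
        print(response.text)
    except ValueError:
        print("(no text returned; the response may have been blocked)")
```

The step that matters is the one no code can do for you: take any specific claim the bot makes about a real person and verify it against primary sources before repeating it.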
Why This Matters for Governance and Politics
Regulating these tools is a colossal challenge, to be sure. We’ve shared thoughts on AI governance here and here. Seems plain to me that Taibbi would have a pretty strong case for libel. But I wonder if our current laws cover an instance like this. It was a machine, after all, that produced the false statements. Can you fine or incarcerate a line of computer code? Taibbi wrote to Google about what Gemini produced and got this response: “Gemini is built as a creativity and productivity tool, and it may not always be accurate or reliable. We’re continuing to quickly address instances in which the product isn’t responding appropriately.”
I’m looking forward to common-sense regulation of AI tools. Even our massively dysfunctional U.S. Congress can get this done: these tools are now everywhere, and the “creations” of Gemini are indefensible. Call me a dreamer, but those two facts will force the issue. Both sides have an interest in getting this right.
But what Gemini represents to me is the growing and brazen willingness, over the past decade, of so many people — politicians, non-elected government officials, programmers, and, I hate to say this, some journalists — to just make shit up if they believe the cause is just. If the end justifies the means, then by all means feel free to obfuscate/hedge/shade/alter/ignore the truth.
For my readers working at staff jobs in government agencies, here’s the moral of the story. There is no need to attempt to manipulate people into “right thinking,” i.e., to support whatever program or project you are proposing. You can be persuasive, certainly. In fact, persuading effectively is a key to good governance. But you persuade by sticking to the relevant facts. My good friends at Bleiker Training have a citizen engagement template they teach that I have found to be the gold standard for persuasion. They call it the Life Preserver. It goes like this:
Whatever you say, write or do, make sure that your Potentially Affected Interests all understand the following four points:
1. There really IS a serious Problem, one that just HAS to be addressed.
2. You ARE the right entity to address it … In fact, given your Mission, it would be IRRESPONSIBLE if you did not address it.
3. The approach you are using – for addressing the problem at hand – is Reasonable … Sensible … Responsible.
4. You ARE listening; you DO care. If, what you’re proposing, is going to HURT some interests, it’s NOT because you don’t care; it’s NOT because you’re not listening.
Frankly, if you can’t persuade people of the merits of your project using the Life Preserver, you need to rethink your project. What many find difficult about the Bleiker approach is that it implores agencies to be as up front about the negative impacts of their project or proposal as they are about the benefits. But that’s how you build credibility. At least, that’s how many of us did it in the before times.
Succumbing to the urge to obfuscate/hedge/shade/alter/ignore the truth is what has given us the current moment in our national politics, IMO. After Super Tuesday, it’s pretty much a fait accompli that the choice for president will be Biden vs. Trump, the rematch a majority of Americans (myself included) do not want.
A few years back, I used to wonder why so many people supported Trump. I was shocked when he won in 2016. How could the guy recorded saying “grab ‘em by the ….” you-know-what get elected president? Now, it’s easy for me to see. Too much BS has been spread in attempts to take down Trump, or to benefit his political opponents, only to be debunked later.
“Somebody at America’s 100 million dinner tables might still say Trump was a Russian agent, Jan. 6 was an organized insurrection, the Hunter Biden laptop was fake, Hunter and Joe did nothing wrong, Trump called neo-Nazis fine people,” writes Holman Jenkins in a March 5 opinion piece in the Wall Street Journal. “But now somebody else can say, ‘Did you know?’ and point to multiple government investigations by the Justice Department inspector general and special counsels. They can point to videos and transcripts online of Mr. Trump’s undistorted, unmisrepresented words.”
For many Americans, I think Trump is a huge middle finger to the establishment in politics, academia and the media. They love him for calling them out, even though he regularly and routinely lies himself. I have not and will not vote for him. I think he’s toxic and cares more about himself than the country. But I understand his appeal. People don’t like being lied to. And they have been lied to, by both people who know better and machines that don’t.
The truth matters. We need to start acting like it, or we’re going to get more races like the one we’re enduring in 2024.
Onward and Upward.
1. H/T for this description to James Kirchick, the prolific pundit and author of Secret City: The Hidden History of Gay Washington.