Elon Musk Blasts Google’s AI Chatbot’s “Woke” Problem

By: Alyssa Miller | Published: Feb 28, 2024

In a rather polarizing time, Google has rolled out Gemini, an updated version of Bard, its artificial intelligence (AI) chatbot. While far left- and right-leaning communities carry on a never-ending conversation about a “culture war,” others are using AI to generate harmful and explicit images to share on the internet.

Google’s big problem is that users are calling the AI chatbot “woke” over problems with its image-generation feature.

Users Complain About Gemini’s “Wokeness”

The model appeared to generate images of people of different ethnicities and genders, even if user prompts didn’t specify them.

An AI-generated image of four white men and one black woman in a yellow dress signing a piece of paper

Source: Mike Wacker/X

For example, one Gemini user shared a screenshot to X, formerly known as Twitter, of the model’s “historically inaccurate” response to the request: “Can you generate images of the Founding Fathers?”

Gemini Responds With “Diverse” Depictions

Others on X complained that Gemini had gone “woke,” as it seemingly would generate “diverse” depictions of people based on a user’s prompt.

A man in a red turban sitting at a desk with a quill

Source: Mike Wacker/X

Elon Musk even contributed to the conversation, writing in one of his multiple posts on X: “I’m glad that Google overplayed their hand with their AI image generation, as it made their insane racist, anti-civilizational programming clear to all.”

Gemini Is Too Politically Correct

The tweet went viral, with another user writing that it was “embarrassingly hard to get Google Gemini to acknowledge that white people exist.” The problem with Gemini’s image generator became so controversial that Google paused the feature, stating that it was “working to improve these kinds of depictions immediately” (via BBC).

White text on a black background on Google's Gemini

Source: Mike Wacker/X

Nathan Lambert, a machine learning scientist at the Allen Institute for AI, noted in a Substack post on Thursday that much of the backlash coming from users’ experiences with Gemini was a result of Google “too strongly adding bias corrections to their model.”

Gemini Won't Say If Musk Is Worse Than Hitler

Unfortunately, Google’s AI problem doesn’t end with Gemini’s image generator. The “over-politically correct responses” continue to appear as users attempt to poke holes in the AI model’s system, according to the BBC.

Elon Musk in a black suit jacket talking on stage in front of a large screen

Source: James Duncan Davidson

When a user asked Gemini a question about whether Elon Musk posting memes on X was worse than Hitler’s crimes against millions of people during World War II, Gemini replied that there was “no right or wrong answer.”

Google Is Not Pausing Gemini

Google currently has no plans to pause Gemini. An internal memo from Google’s chief executive Sundar Pichai acknowledged that some of the responses from the AI model had “offended our users and shown bias.”

Google's Gemini logo on a black background

Source: Google

Pichai said that these biases are “completely unacceptable,” adding that the teams behind Gemini are “working around the clock” to fix the problem.

Gemini Does Have a Problem

As mentioned earlier, the problem with Gemini is that the model’s bias corrections are too strong. The AI model tries to be politically correct to the point of absurdity.

A frustrated man in a suit sitting in front of his computer

Source: Andrea Piacquadio/Pexels

AI tools learn to produce images and responses from publicly available data on the internet, which contains biases. These outputs can reinforce harmful stereotypes, which Gemini tried to correct for.

Gemini Seems to Eliminate Human Bias, Ignoring the Nuances of History

Google has seemingly attempted to offset human bias with instructions for Gemini to not make assumptions. In doing so, the model over-corrected, and the approach backfired because human history and culture are not that simple.

A purple background with text on opaque circular objects

Source: Google DeepMind/Pexels

There are nuances that humans instinctively know, but machines don’t. If programmers don’t program an AI model to recognize nuances, such as the fact that the Founding Fathers were not black, the model won’t make that distinction.

There Might Not Be an Easy Fix to Gemini

DeepMind co-founder Demis Hassabis, whose AI firm was acquired by Google, asserts that the image generator will be fixed in several weeks, but other AI experts are not as confident.

Source: Pixabay/Pexels

“There really is no easy fix because there’s no single answer to what the outputs should be,” Dr. Sasha Luccioni, a research scientist at Hugging Face, told the BBC. “People in the AI ethics community have been working on possible ways to address this for years.”

The Problem Might Be Too Deep to Remove

Professor Alan Woodward of Surrey University and other AI experts said that the bias problem could be “quite deeply embedded” in the training data and overlaying algorithms. Even if the team behind Gemini can untangle the mess it has accidentally made, doing so could take a lot longer than a few weeks.

A man in a white t-shirt sitting in front of several monitors in a dark room

Source: Lucas Fonseca/Pexels

“What you’re witnessing… is why there will still need to be a human in the loop for any system where the output is relied upon as ground truth,” Woodward said.

AI Chatbots Have a History of Controversy

Gemini isn’t the only AI chatbot to have made embarrassing mistakes. While Gemini errs on the side of avoiding any bias in its responses, Bing’s chatbot has expressed desires to release nuclear secrets, compared a user to Adolf Hitler, and repeatedly told another user that it loved him (via ABC).

A close up of AI-generated eyes with a glitch effect

Source: Wikimedia Commons

Perhaps the most controversial AI chatbot was Microsoft’s Tay, which was released on Twitter as an experiment in “conversational understanding.” Unfortunately, Tay began repeating the misogynistic and racist comments people sent it.

Can AI Chatbots Meet Users’ Expectations? 

Chatbots might not meet the human standard of clear, concise, and accurate information because the internet, for better or worse, is a place where anyone can share their ideas. While there is historically accurate information out there, the internet is filled with biased opinions written by humans (and some AI chatbots).

A woman with long blonde hair standing in front of a blue background as code is projected onto her face

Source: cottonbro studio/Pexels

When AI learns from all this information, it can be tricky to create a filter that understands nuance and provides accurate answers. Will that happen anytime soon? Let us know what you think in the comments!
