
Google Temporarily Halts AI Chatbot's Image Generation Amid Criticism, Here's Why

Google's AI chatbot, Gemini, faces social media backlash and has suspended its image generation feature amid criticism for producing historically inaccurate and culturally insensitive depictions, raising concerns about bias in AI models.


On Thursday, Google announced a temporary halt to its Gemini artificial intelligence chatbot's ability to generate images of people, following a day of apologizing for "inaccuracies" in the historical depictions it produced. The search giant faced widespread criticism for producing "diverse" images that were historically or factually inaccurate, such as depicting black Vikings, female popes, and Native Americans among the Founding Fathers, according to the New York Post.

This week, Gemini users shared screenshots on social media showcasing historically white-dominated settings featuring racially diverse characters purportedly generated by the AI, prompting critics to question whether the company is overcorrecting for the potential risk of racial bias in its AI model. Some users heavily criticized Gemini as "absurdly woke" and "unusable" after requests for representative images produced oddly revisionist pictures.

“We’re already working to address recent issues with Gemini’s image generation feature,” stated Google in a post on the social media platform X. “While we do this, we’re going to pause the image generation of people and will re-release an improved version soon.”

Examples quoted by the New York Post included an AI-generated image of a black man resembling George Washington, complete with a white powdered wig and Continental Army uniform, and a Southeast Asian woman dressed in papal attire despite the historical fact that all 266 popes have been white men.

In another startling example reported by The Verge, Gemini generated "diverse" depictions of Nazi-era German soldiers, including an Asian woman and a black man dressed in 1943 military attire.

William A. Jacobson, a Cornell University law professor and founder of the Equal Protection Project, a watchdog group, expressed concern to The New York Post: "In the name of anti-bias, actual bias is being built into the systems."

“This is a concern not just for search results, but real-world applications where ‘bias free’ algorithm testing actually is building bias into the system by targeting end results that amount to quotas.”

The issue may stem from Google's "training process" for the "large-language model" powering Gemini's image tool, as suggested by Fabio Motoki, a lecturer at the UK's University of East Anglia, who co-authored a paper last year identifying a noticeable left-leaning bias in ChatGPT.

“Remember that reinforcement learning from human feedback (RLHF) is about people telling the model what is better and what is worse, in practice shaping its ‘reward’ function – technically, its loss function,” Motoki told The Post.

“So, depending on which people Google is recruiting, or which instructions Google is giving them, it could lead to this problem.”
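For readers unfamiliar with the technique Motoki describes: reward models in RLHF are typically trained on pairs of responses that human labelers have ranked. The minimal Python sketch below illustrates the generic pairwise preference loss behind that process; it is not Google's code, and the function name and example scores are invented for illustration.

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Generic pairwise RLHF-style reward-model loss: human labelers mark
    one response as 'better', and the loss pushes the model to score it
    above the 'worse' one. Equivalent to -log(sigmoid(r_chosen - r_rejected))."""
    return -math.log(1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected))))

# Whatever the labelers prefer defines what the loss rewards.
print(round(preference_loss(2.0, 0.5), 3))  # model agrees with labelers: low loss
print(round(preference_loss(0.5, 2.0), 3))  # model disagrees: high loss
```

Because the loss is defined entirely by which response the labelers prefer, the composition of the labeling pool and the instructions it receives directly shape what the model learns to favor, which is the mechanism Motoki points to.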

According to the Washington Post, the extent of the issue remains uncertain.

Prior to Google disabling the image-generation feature on Thursday morning, Gemini responded to prompts from a Washington Post reporter by generating images of White individuals when asked to depict various personas such as a beautiful woman, a handsome man, a social media influencer, an engineer, a teacher, and a gay couple.

Past research has indicated that AI image generators have the potential to magnify racial and gender stereotypes present in their training data. Furthermore, without proper filters, they tend to depict lighter-skinned men more frequently when tasked with generating images of people across different contexts.

On Wednesday, Google acknowledged that Gemini “is offering inaccuracies in some historical image generation depictions” and stated that it’s “working to improve these kinds of depictions immediately.”

Google also added that while Gemini's capacity to “generate a wide range of people” was “generally a good thing” due to Google's global user base, “it’s missing the mark here,” as conveyed in a post on X.

Sourojit Ghosh, a researcher at the University of Washington who studies bias in AI image-generators, expressed support for Google's decision to temporarily halt the generation of people's faces. However, he said he felt conflicted about the process that led to this outcome.

Contrary to recent claims circulating on social media about "white erasure" and the notion that Gemini refuses to generate faces of white individuals, Ghosh's research has predominantly shown the opposite.

“The rapidness of this response in the face of a lot of other literature and a lot of other research that has shown traditionally marginalized people being erased by models like this — I find a little difficult to square,” he said.

Because Google has not published the parameters governing the Gemini chatbot's behavior, there is no clear explanation for why the software was creating diverse versions of historical figures and events.

When the AP asked Gemini to produce images of people or a large crowd, the response indicated ongoing efforts to enhance this capability.

“We expect this feature to return soon and will notify you in release updates when it does,” the chatbot said.

Ghosh suggested that Google could potentially develop a method to filter responses based on the historical context of a user's prompt. However, addressing the broader issues posed by image-generators constructed from extensive collections of photos and artwork available on the internet demands more than just a technical fix.
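A context filter of the sort Ghosh describes might, in its crudest possible form, resemble the hypothetical Python sketch below; the keyword list and function are invented for illustration and correspond to no real Gemini component.

```python
# Hypothetical illustration of a history-aware prompt filter; the keyword
# list is invented and far too crude for any production system.
HISTORICAL_MARKERS = ("founding fathers", "pope", "viking", "1943 soldier", "medieval")

def is_historical_prompt(prompt: str) -> bool:
    """Return True if the prompt appears to reference a specific
    historical setting, so demographic rewrites could be skipped."""
    text = prompt.lower()
    return any(marker in text for marker in HISTORICAL_MARKERS)

for prompt in ("a portrait of the Founding Fathers", "a teacher at a whiteboard"):
    mode = "preserve historical accuracy" if is_historical_prompt(prompt) else "default behavior"
    print(f"{prompt!r} -> {mode}")
```

As Ghosh notes, even a far more sophisticated version of such a filter would only be a partial fix for representational harms baked into the training data.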

“You’re not going to overnight come up with a text-to-image generator that does not cause representational harm,” he said. “They are a reflection of the society in which we live.”
