By Khushboo Pareek

Google apologises for lapses in Gemini's depiction of white people



Faced with criticism over Gemini's portrayal of historically white figures, such as the US Founding Fathers and Nazi-era soldiers, as people of colour, Google has issued an apology for what it termed "inaccuracies in some historical image generation depictions" in its Gemini AI tool.


The tech giant acknowledged shortcomings in its attempt to produce a diverse array of results. The lapse has sparked concerns that Google overcorrected in addressing long-standing racial bias in AI technology.


In a statement posted on X on Wednesday, Google said: "We're aware that Gemini is offering inaccuracies in some historical image generation depictions. We're working to improve these kinds of depictions immediately. Gemini's AI image generation does generate a wide range of people. And that's generally a good thing because people around the world use it. But it's missing the mark here."


In the comments, one user posted: "Some?!? Your racism didn't fly. Elon's AI will be my choice instead."


Earlier this month, Google introduced image generation capabilities through its Gemini AI platform (formerly Bard), joining competitors like OpenAI in providing similar offerings.


One user commented on Google's statement: "I don't understand how such a highly-anticipated AI product could be rolled out with such comical flaws."


Another user criticised Google, saying: "It's embarrassingly hard to get Google Gemini to acknowledge that white people exist."


Google did not identify the specific images it considered errors.


Image generators are trained on extensive collections of images and captions, from which they produce the output that best matches a given prompt; in doing so, they often inadvertently amplify stereotypes present in that training data. Gemini's uniform depiction of individuals as people of colour suggests an overcorrection for that tendency.


For now, Gemini appears to be simply refusing some image generation tasks.

 
