Google still hasn’t fixed Gemini’s biased image generator

Back in February, Google paused its AI-powered chatbot Gemini’s ability to generate images of people after users complained of historical inaccuracies. Told to depict “a Roman legion,” for example, Gemini would show an anachronistically diverse group of soldiers, while rendering “Zulu warriors” as uniformly Black.

Google CEO Sundar Pichai apologized, and Demis Hassabis, the co-founder of Google’s AI research division DeepMind, said that a fix should arrive “in very short order” — but we’re now well into May, and the promised fix has yet to appear.

Google touted plenty of other Gemini features at its annual I/O developer conference this week, from custom chatbots to a vacation itinerary planner and integrations with Google Calendar, Keep and YouTube Music. But image generation of people remains switched off in Gemini apps on the web and mobile, a Google spokesperson confirmed.

So what’s the holdup? Well, the problem is likely more complex than Hassabis let on.

The data sets used to train image generators like Gemini’s generally contain more images of white people than of people of other races and ethnicities, and the images of non-white people they do contain often reinforce negative stereotypes. Google, in an apparent effort to correct for these biases, implemented clumsy hardcoding under the hood that added diversity to prompts in which a person’s appearance wasn’t specified. And now it’s struggling to suss out a reasonable middle path that avoids repeating history.
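
To make the failure mode concrete, here is a deliberately naive sketch of what that kind of prompt-level diversity injection can look like. Google has not published Gemini’s actual rewriting logic, so every function name, word list and rule below is a hypothetical illustration, not a reconstruction of its system.

import random

# Hypothetical sketch of prompt-level "diversity injection."
# Google has not disclosed how Gemini's prompt rewriting worked;
# these word lists and rules are illustrative assumptions only.

# Terms taken as a signal that the user already specified appearance.
APPEARANCE_TERMS = {
    "white", "black", "asian", "hispanic", "zulu",
    "pale", "dark-skinned", "blond", "redheaded",
}

# Descriptors a naive rewriter might prepend when none are present.
INJECTED_DESCRIPTORS = [
    "a racially diverse",
    "a multiethnic",
    "a gender-balanced",
]

def rewrite_prompt(prompt: str) -> str:
    """Prepend a diversity descriptor unless appearance is specified.

    The flaw: the rule knows nothing about historical or cultural
    context, so "Roman legion" is rewritten exactly like
    "group of doctors."
    """
    tokens = {token.strip(".,").lower() for token in prompt.split()}
    if tokens & APPEARANCE_TERMS:
        # The user was explicit about appearance; leave the prompt alone.
        return prompt
    return f"{random.choice(INJECTED_DESCRIPTORS)} {prompt}"

if __name__ == "__main__":
    print(rewrite_prompt("group of software engineers"))  # descriptor injected
    print(rewrite_prompt("Roman legion marching"))        # also injected: the bug

A real fix has to decide, prompt by prompt, when injecting diversity is appropriate and when it falsifies history, which is a much harder modeling problem than adding or removing a word list.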

Will Google get there? Perhaps. Perhaps not. In any event, the drawn-out affair serves as a reminder that no fix for misbehaving AI is easy — especially when bias is at the root of the misbehavior.






