‘Embarrassing and wrong’: Google admits it lost control of image-generating AI

Google has apologized (or come very close to apologizing) for another embarrassing AI blunder this week: an image-generating model that injected diversity into pictures with a farcical disregard for historical context. While the underlying issue is perfectly understandable, Google blames the model for “becoming” over-sensitive. The model didn’t make itself, guys.

The AI system in question is Gemini, the company’s flagship conversational AI platform, which, when asked, calls out to a version of the Imagen 2 model to create images on demand.

Recently, however, people found that asking it to generate imagery of certain historical circumstances or people produced laughable results. For instance, the founding fathers, whom we know to be white slave owners, were rendered as a multicultural group that included people of color.

This embarrassing and easily replicated issue was quickly lampooned by commentators online. It was also, predictably, roped into the ongoing debate about diversity, equity, and inclusion (currently at a reputational local minimum), and seized by pundits as evidence of the woke mind virus further penetrating the already liberal tech sector.

An image generated by Twitter user Patrick Ganley.

It’s DEI gone mad, shouted conspicuously concerned citizens. This is Biden’s America! Google is an “ideological echo chamber,” a stalking horse for the left! (The left, it must be said, was also suitably perturbed by this weird phenomenon.)

But as anyone with any familiarity with the tech could tell you, and as Google explains in its rather abject little apology-adjacent post today, this problem was the result of a quite reasonable workaround for systemic bias in training data.

Say you want to use Gemini to create a marketing campaign, and you ask it to generate 10 pictures of “a person walking a dog in a park.” Because you don’t specify the type of person, dog, or park, it’s dealer’s choice — the generative model will put out what it is most familiar with. And in many cases, that is a product not of reality, but of the training data, which can have all kinds of biases baked in.

What kinds of people, and for that matter dogs and parks, are most common in the thousands of relevant images the model has ingested? The fact is that white people are over-represented in a lot of these image collections (stock imagery, rights-free photography, etc.), and as a result the model will default to white people in a lot of cases if you don’t specify.
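The mechanism is easy to simulate. Here is a toy sketch, in Python, of how a model that samples with no extra guidance simply mirrors whatever skew its training data carries — the category labels and the 70/10/10/10 split are invented for illustration, not real dataset statistics:

```python
import random

# Invented training distribution with a deliberate 70% skew.
# Real image datasets are far messier, but the principle is the same.
training_data = (
    ["white person"] * 70
    + ["Black person"] * 10
    + ["Asian person"] * 10
    + ["Latino person"] * 10
)

def generate_unconditioned(n, seed=0):
    """Sample n 'generations' the way an unguided model might:
    by drawing from the training distribution whenever the prompt
    doesn't specify a characteristic."""
    rng = random.Random(seed)
    return [rng.choice(training_data) for _ in range(n)]

images = generate_unconditioned(10)
# With a 70% skew, most of the 10 results will be "white person" —
# not because reality looks like that, but because the data does.
```

Nothing in this sketch is "biased code"; the skew lives entirely in the data the sampler draws from, which is exactly the situation model providers try to correct for.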

That’s just an artifact of the training data, but as Google points out, “because our users come from all over the world, we want it to work well for everyone. If you ask for a picture of football players, or someone walking a dog, you may want to receive a range of people. You probably don’t just want to only receive images of people of just one type of ethnicity (or any other characteristic).”

Illustration of a group of people recently laid off and holding boxes.

Imagine asking for an image like this – what if it was all one type of person? Bad outcome!

Nothing wrong with getting a picture of a white guy walking a golden retriever in a suburban park. But if you ask for 10, and they’re all white guys walking goldens in suburban parks? And you live in Morocco, where the people, dogs, and parks all look different? That’s simply not a desirable outcome. If someone doesn’t specify a characteristic, the model should opt for variety, not homogeneity, despite how its training data might bias it.

This is a common problem across all kinds of generative media. And there’s no simple solution. But in cases that are especially common, sensitive, or both, companies like Google, OpenAI, Anthropic, and so on invisibly include extra instructions for the model.

I can’t stress enough how commonplace this kind of implicit instruction is. The entire LLM ecosystem is built on implicit instructions — system prompts, as they are sometimes called, where things like “be concise,” “don’t swear,” and other guidelines are given to the model before every conversation. When you ask for a joke, you don’t get a racist joke — because despite the model having ingested thousands of them, it has also been trained, like most of us, not to tell those. This isn’t a secret agenda (though it could do with more transparency), it’s infrastructure.
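The plumbing is simple to sketch. Below is a minimal, hypothetical illustration of a system prompt being silently prepended to every conversation — the guideline text and the message format (loosely modeled on common chat APIs) are stand-ins, not Google's or anyone's actual system prompt:

```python
# Hypothetical system prompt; real ones are far longer and more detailed.
SYSTEM_PROMPT = "Be concise. Don't swear. Decline to tell offensive jokes."

def build_request(user_message: str) -> list[dict]:
    """Assemble the message list the model actually sees.
    The user's text is never sent alone; the invisible
    instructions always come first."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_message},
    ]

request = build_request("Tell me a joke.")
# The model receives two messages, but the user only ever typed one.
```

The user sees none of this, which is the whole point — and also why failures in that hidden layer look so mysterious from the outside.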

Where Google’s model went wrong was that it failed to have implicit instructions for situations where historical context was important. So while a prompt like “a person walking a dog in a park” is improved by the silent addition of “the person is of a random gender and ethnicity” or whatever they put, “the US founding fathers signing the Constitution” is definitely not improved by the same.
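The failure mode can be sketched as a prompt-rewriting step applied indiscriminately. Everything below is hypothetical — the cue list, the injected phrase, and the guarded variant are invented to illustrate the shape of the bug and one possible fix, not Google's actual pipeline:

```python
# Invented cue list; a production system would need something far
# more robust than substring matching.
HISTORICAL_CUES = ("founding fathers", "1943", "medieval", "viking")

def rewrite_naive(prompt: str) -> str:
    """The apparent bug: one diversity rule applied to every prompt,
    with no check for whether the prompt demands historical accuracy."""
    return prompt + ", depicting people of a range of genders and ethnicities"

def rewrite_guarded(prompt: str) -> str:
    """One possible fix: skip the injection when the prompt looks
    like a specific historical scene."""
    if any(cue in prompt.lower() for cue in HISTORICAL_CUES):
        return prompt
    return rewrite_naive(prompt)

rewrite_guarded("a person walking a dog in a park")  # gets the addition
rewrite_guarded("the US founding fathers signing the Constitution")  # left alone
```

The hard part, of course, is not the branch itself but deciding reliably which prompts belong on which side of it — which is presumably where Gemini's tuning fell down.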

As the Google SVP Prabhakar Raghavan put it:

First, our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range. And second, over time, the model became way more cautious than we intended and refused to answer certain prompts entirely — wrongly interpreting some very anodyne prompts as sensitive.

These two things led the model to overcompensate in some cases, and be over-conservative in others, leading to images that were embarrassing and wrong.

I know how hard it is to say “sorry” sometimes, so I forgive Prabhakar for stopping just short of it. More important is some interesting language in there: “The model became way more cautious than we intended.”

Now how would a model “become” anything? It’s software. Someone — Google engineers in their thousands — built it, tested it, iterated on it. Someone wrote the implicit instructions that improved some answers and caused others to fail hilariously. When this one failed, anyone who could inspect the full prompt would likely have found the thing Google’s team did wrong.

Google blames the model for “becoming” something it wasn’t “intended” to be. But they made the model! It’s like they broke a glass, and rather than saying “we dropped it,” they say “it fell.” (I’ve done this.)

Mistakes by these models are inevitable, certainly. They hallucinate, they reflect biases, they behave in unexpected ways. But the responsibility for those mistakes does not belong to the models, it belongs to the people who made them. Today that’s Google. Tomorrow it’ll be OpenAI. The next day, and probably for a few months straight, it’ll be X.AI.

These companies have a strong interest in convincing you that AI is making its own mistakes. Don’t let them.




Disclaimer

We strive to uphold the highest ethical standards in all of our reporting and coverage. We at StartupNews.fyi want to be transparent with our readers about any potential conflicts of interest that may arise in our work. It’s possible that some of the investors we feature may have connections to other businesses, including competitors or companies we write about. However, we want to assure our readers that this will not have any impact on the integrity or impartiality of our reporting. We are committed to delivering accurate, unbiased news and information to our audience, and we will continue to uphold our ethics and principles in all of our work. Thank you for your trust and support.
