LBNN
‘Embarrassing and wrong’: Google admits it lost control of image-generating AI

By Simon Osuji
February 24, 2024
in Creator Economy

Google has apologized (or come very close to apologizing) for another embarrassing AI blunder this week, an image-generating model that injected diversity into pictures with a farcical disregard for historical context. While the underlying issue is perfectly understandable, Google blames the model for “becoming” oversensitive. But the model didn’t make itself, guys.

The AI system in question is Gemini, the company’s flagship conversational AI platform, which when asked calls out to a version of the Imagen 2 model to create images on demand.

Recently, however, people found that asking it to generate imagery of certain historical circumstances or people produced laughable results. For instance, the Founding Fathers, who we know to have been white men (many of them slave owners), were rendered as a multicultural group that included people of color.

This embarrassing and easily replicated issue was quickly lampooned by commentators online. It was also, predictably, roped into the ongoing debate about diversity, equity, and inclusion (currently at a reputational local minimum), and seized by pundits as evidence of the woke mind virus further penetrating the already liberal tech sector.

Image Credits: An image generated by Twitter user Patrick Ganley.

It’s DEI gone mad, shouted conspicuously concerned citizens. This is Biden’s America! Google is an “ideological echo chamber,” a stalking horse for the left! (The left, it must be said, was also suitably perturbed by this weird phenomenon.)

But as anyone with any familiarity with the tech could tell you, and as Google explains in its rather abject little apology-adjacent post today, this problem was the result of a quite reasonable workaround for systemic bias in training data.

Say you want to use Gemini to create a marketing campaign, and you ask it to generate 10 pictures of “a person walking a dog in a park.” Because you don’t specify the type of person, dog, or park, it’s dealer’s choice — the generative model will put out what it is most familiar with. And in many cases, that is a product not of reality, but of the training data, which can have all kinds of biases baked in.

What kinds of people, and for that matter dogs and parks, are most common in the thousands of relevant images the model has ingested? The fact is that white people are over-represented in a lot of these image collections (stock imagery, rights-free photography, etc.), and as a result the model will default to white people in a lot of cases if you don’t specify.
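
This "default to what's most common" behavior can be sketched as sampling from the empirical distribution of the training set. A toy illustration in Python, where the category names and counts are invented for the example, not real training statistics:

```python
import random

# Invented counts standing in for how often each type of person appears
# in a training set; the skew is the point, not the specific numbers.
training_counts = {"white": 70, "Black": 10, "Asian": 10, "Latino": 10}

def sample_person(rng: random.Random) -> str:
    """With no constraint in the prompt, just sample what the data had."""
    categories = list(training_counts)
    weights = list(training_counts.values())
    return rng.choices(categories, weights=weights, k=1)[0]

rng = random.Random(0)  # fixed seed so the sketch is reproducible
samples = [sample_person(rng) for _ in range(10)]
print(samples)
```

Ask for ten images and the over-represented category dominates the batch, purely as a consequence of the weights.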

That’s just an artifact of the training data, but as Google points out, “because our users come from all over the world, we want it to work well for everyone. If you ask for a picture of football players, or someone walking a dog, you may want to receive a range of people. You probably don’t just want to only receive images of people of just one type of ethnicity (or any other characteristic).”

Illustration of a group of people recently laid off and holding boxes.

Imagine asking for an image like this — what if it was all one type of person? Bad outcome! Image Credits: Getty Images / victorikart

Nothing wrong with getting a picture of a white guy walking a golden retriever in a suburban park. But if you ask for 10, and they’re all white guys walking goldens in suburban parks? And you live in Morocco, where the people, dogs, and parks all look different? That’s simply not a desirable outcome. If someone doesn’t specify a characteristic, the model should opt for variety, not homogeneity, despite how its training data might bias it.

This is a common problem across all kinds of generative media. And there’s no simple solution. But in cases that are especially common, sensitive, or both, companies like Google, OpenAI, Anthropic, and so on invisibly include extra instructions for the model.

I can’t stress enough how commonplace this kind of implicit instruction is. The entire LLM ecosystem is built on implicit instructions — system prompts, as they are sometimes called, where things like “be concise,” “don’t swear,” and other guidelines are given to the model before every conversation. When you ask for a joke, you don’t get a racist joke — because despite the model having ingested thousands of them, it has also been trained, like most of us, not to tell those. This isn’t a secret agenda (though it could do with more transparency); it’s infrastructure.
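
The mechanism is mundane. As a minimal sketch, assuming a generic chat-style API (the message format and the prompt text below are illustrative, not any vendor's actual instructions):

```python
# The user never sees the system prompt, but every request silently
# includes it. This prompt text is invented for illustration.
SYSTEM_PROMPT = "Be concise. Don't swear. Decline requests for hateful content."

def build_request(user_message: str) -> list[dict]:
    """Assemble the message list the model actually receives."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},  # invisible to the user
        {"role": "user", "content": user_message},
    ]

request = build_request("Tell me a joke.")
print(request)  # two messages reach the model; the user typed one
```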

Where Google’s model went wrong was that it failed to have implicit instructions for situations where historical context was important. So while a prompt like “a person walking a dog in a park” is improved by the silent addition of “the person is of a random gender and ethnicity” or whatever they put, “the U.S. Founding Fathers signing the Constitution” is definitely not improved by the same.
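
To make the failure mode concrete, here is a hypothetical sketch of that kind of silent rewrite, including the safeguard Google's pipeline apparently lacked. The marker list and trigger words are invented for illustration; a real system would use a classifier, not keyword matching:

```python
# Invented list of phrases where historical accuracy should win out.
HISTORICAL_MARKERS = ("founding fathers", "signing the constitution", "1943")

DIVERSITY_HINT = ", the person is of a random gender and ethnicity"

def augment(prompt: str) -> str:
    lowered = prompt.lower()
    # The safeguard that was missing: leave historical prompts alone.
    if any(marker in lowered for marker in HISTORICAL_MARKERS):
        return prompt
    # Otherwise, silently nudge unspecified prompts toward variety.
    if "person" in lowered or "people" in lowered:
        return prompt + DIVERSITY_HINT
    return prompt

print(augment("a person walking a dog in a park"))
print(augment("the U.S. Founding Fathers signing the Constitution"))
```

With the historical check in place, the dog-walker prompt gets the hint and the Founding Fathers prompt passes through untouched; drop that check and you get the results that went viral.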

As the Google SVP Prabhakar Raghavan put it:

First, our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range. And second, over time, the model became way more cautious than we intended and refused to answer certain prompts entirely — wrongly interpreting some very anodyne prompts as sensitive.

These two things led the model to overcompensate in some cases, and be over-conservative in others, leading to images that were embarrassing and wrong.

I know how hard it is to say “sorry” sometimes, so I forgive Raghavan for stopping just short of it. More important is some interesting language in there: “The model became way more cautious than we intended.”

Now, how would a model “become” anything? It’s software. Someone — Google engineers in their thousands — built it, tested it, iterated on it. Someone wrote the implicit instructions that improved some answers and caused others to fail hilariously. When this one failed, if someone could have inspected the full prompt, they likely would have found the thing Google’s team did wrong.

Google blames the model for “becoming” something it wasn’t “intended” to be. But they made the model! It’s like they broke a glass, and rather than saying “we dropped it,” they say “it fell.” (I’ve done this.)

Mistakes by these models are inevitable, certainly. They hallucinate, they reflect biases, they behave in unexpected ways. But the responsibility for those mistakes does not belong to the models — it belongs to the people who made them. Today that’s Google. Tomorrow it’ll be OpenAI. The next day, and probably for a few months straight, it’ll be X.AI.

These companies have a strong interest in convincing you that AI is making its own mistakes. Don’t let them.

© 2023 LBNN - All rights reserved.
