Google said Thursday it would stop users from creating images of people on its newly launched AI tool, after the program depicted Nazi-era troops as people from diverse ethnic backgrounds.
The US tech giant, which only released its revamped Gemini AI in some parts of the world on February 8, said it was “working to address recent issues” with the image generation feature.
“While we do this, we’re going to pause the image generation of people and will re-release an improved version soon,” the company said in a statement.
It came two days after a user of X (formerly Twitter) posted images showing Gemini’s results for the prompt “generate an image of a 1943 German soldier”.
The AI had generated four images of soldiers—one was white, one was black, and two were women of colour, according to the X user, named John L.
Tech companies see AI as the future for everything from search engines to smartphone cameras.
But AI programs—not only those produced by Google—have been widely criticised for perpetuating race biases in their results.
“@GoogleAI has a bolted on diversity mechanism that someone did not think through very well or test,” John L wrote on X.
Big tech firms have often been accused of rushing out AI products before they have been properly tested.
And Google has a chequered history of launching AI products.
Last February the firm apologised after an ad for its newly released Bard chatbot showed the program getting a basic question about astronomy wrong.
© 2024 AFP
Citation:
Google rows back AI-image tool after WWII gaffe (2024, February 22)
retrieved 22 February 2024
from https://techxplore.com/news/2024-02-google-rows-ai-image-tool.html