Google’s CEO acknowledges “completely unacceptable” AI problems
Google’s new artificial intelligence, Gemini, is off to a rocky start. Just two weeks after its official introduction, Google has had to pause some of the tool’s features after offensive and at times absurd results it generated were shared across social media. The results in question came from asking the tool to create images based on a text description.

In some instances, the tool refused to depict white people in historical contexts where that would be accurate, or inserted people of Asian or Black descent when asked for images of Vikings, Nazis, or medieval English knights. The issue is not limited to images: the text-generation engine has also come under scrutiny for alleged biases in the responses it offers.

In a statement, Sundar Pichai, Google’s CEO, has finally acknowledged the problem. “I know that some of its responses have offended our users and displayed biases; to be clear, that is completely unacceptable, and we have made a mistake,” Pichai wrote.

While there’s no official justification, the suspicion is that in an attempt to always present images showing people of diverse races, Google has not accurately gauged the effect in specific contexts where these responses can seem absurd.

The posts circulating online depicted, for example, the founding fathers of the United States as Black, Asian, or Indian individuals. When asked for an image of the Pope, Gemini could also produce the image of a woman in papal attire.

“No artificial intelligence is perfect, especially at this stage of the industry’s development, but we know the bar is high for us, and we will continue to work on the issue for as long as necessary,” adds Pichai in his statement.

The problem Google originally set out to address is, in any case, real. Current artificial intelligence engines are trained on materials that may contain biases and prejudices, and they risk perpetuating them unless countermeasures are built in. This concern occupies many of the researchers, developers, and academics involved in creating these new tools.

For Google, however, this episode is a blow to an already tarnished image. Despite having internally developed some of the technologies that made modern artificial intelligence possible, the prevailing view in the industry is that the company is not innovating quickly enough and that its rivals, especially OpenAI, have managed to build far more advanced products.