Google's AI wrongly stated that Obama is a Muslim; the company is now disabling certain results.
Google rolled out a new AI-powered search summary tool earlier this month, designed to condense search results so users don't have to click through numerous links to find answers. The feature drew criticism this week, however, after it gave incorrect or misleading answers to some users' queries.
For instance, a couple of users shared on the social media platform X that Google's AI summary stated former US President Barack Obama is a Muslim, a prevalent misconception. In reality, Obama is a Christian. Another user shared that Google's AI summary claimed "none of Africa's 54 recognized countries start with the letter 'K'"—ignoring Kenya.
Google confirmed to CNN on Friday that the AI overviews for both queries were removed due to violating the company's policies.
"The vast majority of AI Overviews are of high quality, with links to dig deeper on the web," Google spokesperson Colette Garcia said in a statement. She added that some of the widely shared examples of Google AI errors appear to be manipulated images. "We conducted significant testing before launching this new experience, and as with other products we've launched in Search, we value the feedback. We're taking quick action when necessary under our content policies."
A note at the bottom of each Google AI search overview acknowledges that the AI-generated summaries are experimental. Google says it runs tests simulating the activities of potential bad actors in an effort to keep inaccurate or low-quality results out of its AI summaries.
Google's search summaries are part of the company's broader push to integrate its Gemini AI technology into all of its products as it competes with AI rivals such as OpenAI and Meta. The latest mishap illustrates the risk of adding AI to search: the technology can state wrong information with confidence.
Even on less crucial queries, Google's AI overview seems to sometimes provide inaccurate or puzzling information.
For instance, when CNN searched "how much sodium is in pickle juice," the AI overview stated that an 8 fluid ounce serving of pickle juice contains 342 milligrams of sodium, while a smaller serving (3 fluid ounces) contains almost double that amount (690 milligrams). (Best Maid pickle juice, sold at Walmart, lists 250 milligrams of sodium in just 1 ounce.)
CNN also searched: "data used for training google ai." The AI overview acknowledged that it's unclear whether Google prevents copyrighted materials from being included in the online data scraped to train its AI models, indirectly addressing a major concern about how AI companies operate.
This isn't the first time Google has had to scale back its AI tools after an embarrassing slip-up. In February, the company paused its AI photo generator's ability to create images of people after criticism that it produced historically inaccurate images, predominantly depicting people of color in place of White people.
Google's Search Labs page lets users in regions where AI search overviews are available toggle the feature on and off.
In light of these issues, Google says it is being more cautious with AI-generated information and is revising its policies to keep misleading or incorrect details out of its AI summaries. As it continues to integrate Gemini across its products, the company acknowledges that maintaining accuracy and credibility is essential.
Source: edition.cnn.com