Google’s newly launched AI Overview feature, which aims to provide users with AI-generated summaries of search results, has been criticized for delivering misleading, inaccurate, and sometimes downright bizarre answers.
The feature, now rolling out to billions of users after Google doubled down on it at the recent Google I/O developer conference, has become the subject of widespread mockery and concern on social media as users exposed examples of the AI’s blunders.
It was only a matter of time. Human curiosity gets the better of AI guardrails one way or another.
Journalists and everyday users alike have taken to X and other platforms to highlight instances where the AI Overview feature has cited dubious sources, such as satirical articles from The Onion or joke posts on Reddit, as if they were factual.
In one of the more alarming cases, computer scientist Melanie Mitchell demonstrated an example of the feature displaying a conspiracy theory suggesting that former President Barack Obama is Muslim, apparently as a result of the AI misinterpreting information from an Oxford University Press research platform.
Other examples of the AI’s errors include plagiarizing text from blogs without removing personal references to the authors’ children, failing to acknowledge the existence of African countries that start with the letter “K,” and even suggesting that pythons are mammals.
Some of these inaccurate results, such as the Obama conspiracy theory or the suggestion to put glue on pizza, no longer display an AI summary and instead show articles referencing the AI’s factual woes.
However, people are now wondering whether AI Overview can ever serve its purpose reliably.
Google has already acknowledged the issue, with a company spokesperson telling The Verge that the mistakes appeared on “generally very unusual queries and aren’t representative of most people’s experiences.”
However, the exact cause of the problem remains unclear. It could be due to the AI’s tendency to “hallucinate.”
Or, it could stem from the sources Google uses to generate summaries, such as satirical articles or troll posts on social media.
In an interview with The Verge, Google CEO Sundar Pichai addressed the issue of AI hallucinations, acknowledging that they’re an “unsolved problem” but stopping short of providing a timeline for a solution.
This isn’t the first time Google has faced criticism over its AI products; earlier this year, the company’s Gemini AI, a competitor to OpenAI’s ChatGPT and DALL-E, came under fire for generating historically inaccurate images, including racially diverse Nazi officers, white women presidents, and a female pope.
In response, Google publicly apologized and temporarily suspended Gemini’s ability to generate images of people.
AI Overview has also drawn criticism from website owners and the marketing community, as it threatens to shift users away from interacting with traditional search results toward simply relying on AI-generated snippets.