
“Legitimately dangerous”: Google’s flawed AI overviews spark ridicule and concern

Should you add glue to your pizza, stare directly into the sun for half an hour a day, eat a rock or a toxic mushroom, treat a snake bite with ice, and jump from the Golden Gate Bridge?

According to information provided through Google Search's new AI Overview feature, these obviously silly and harmful suggestions are not just presented as good ideas, but as the very first results a user sees when searching for the corresponding topics.

What's going on here, where is all this misinformation coming from, and why is Google surfacing it at the top of its search results pages right now? Let's dive deeper.

What is the Google AI Overview?

In its quest to catch up with competitor OpenAI and its successful chatbot ChatGPT in the large language model (LLM) chatbot and search game, Google introduced a new feature called "Search Generative Experience" almost a year ago, in May 2023.

Described at the time as "an AI-powered snapshot of key information to consider, with links to dig deeper," it essentially appeared as a new paragraph of text directly below the Google Search input bar, above the traditional list of blue links that users normally get when doing a Google search.

The feature is said to be based on search-specific AI models. At the time, it was an "opt-in" service, and users had to jump through a number of hoops to activate it.

But 10 days ago, at Google's I/O conference, amid a series of AI-related announcements, the company announced that the Search Generative Experience had been renamed AI Overviews and would be available as a standard experience in Google Search to all users, starting with those in the United States.

There are ways to turn the feature off or perform Google searches without AI Overviews (namely via the Web tab in Google Search), but in this case, users will have to take some extra steps to do so.

Why is the Google AI Overview controversial?

Since Google enabled AI Overviews as the default for users in the US, some people have been posting on X and other social sites about the terrible, horrible, and no-good results this feature produces for various queries.

In some cases, the AI-powered feature displays completely false, inflammatory, and downright dangerous information.

Even celebrities like musician Lil Nas X have joined in the fun:

Other results are more harmless but still wrong, and make Google look silly and untrustworthy:

The poor-quality AI-generated results have taken on a life of their own and have even become a meme, with some users editing fake answers into screenshots to make Google look even worse than the actual results already do:

Google has labeled the AI Overview feature as "experimental," adding the text "Generative AI is experimental" at the end of every result, along with a link to a page describing the new feature in more detail.

On this page, Google writes: "AI Overviews can make search easier by providing an AI-generated snapshot of key information and links to dig deeper… With user feedback and human reviews, we responsibly evaluate and improve the quality of our results and products."

Will Google withdraw AI Overviews?

But some users took to X (formerly Twitter) to call on Google to remove the AI Overview feature, or to predict that the search giant would do so, at least temporarily. That would be similar to the course Google took after its Gemini AI image-generation feature was found to be producing racially and historically inaccurate images earlier this year, infuriating prominent Silicon Valley libertarians and politically conservative figures such as Marc Andreessen and Elon Musk.

In a statement to The Verge, a Google spokesperson said of the AI Overview feature that the examples users have been sharing are:

Additionally, The Verge reported that:

But as some have noted on X, this sounds very much like victim blaming.

Others have argued that AI developers could be held legally liable for dangerous results like those presented in AI Overviews:

Importantly, tech journalists and other digitally savvy users have noticed that Google appears to be using its AI models to create summaries of content it has previously indexed in its search index, content that doesn't originate from Google but that it still relies on to provide its users with "key information."

Ultimately, it’s difficult to say what percentage of searches display this misinformation.

One thing is clear, however: AI Overviews appear to be more vulnerable than Google Search before them to misinformation from untrustworthy sources, or to information posted as a joke that the underlying AI models responsible for the summaries don't recognize as such and instead treat as serious.

It remains to be seen whether users will actually act on the information provided in these results, but if they do, it is clearly unwise and could pose risks to their health and safety.

Let's hope that users are smart enough to check alternative sources, such as rival AI search startup Perplexity, which currently seems to have less trouble surfacing accurate information than Google's AI Overviews (an unfortunate irony for the search giant and its users, considering that Google was the first to design and formulate the Transformer machine learning architecture at the heart of the modern generative AI/LLM boom).
