
Google's AI search tool tells users to 'eat rocks' for your health


Google's new artificial intelligence-based search tool has told users that eating rocks can be healthy and that glue can help cheese stick to pizza, drawing ridicule and raising questions about the company's decision to build an experimental feature into its core product.

"Eating the right rocks can be good for you because they contain minerals that are essential for your body's health," Google's AI Overview responded to a query from the Financial Times on Friday, apparently drawing on an April 2021 satirical article from The Onion with the headline "Geologists recommend eating at least one small rock per day."

Other examples of incorrect answers include the suggestion to add glue to pizza sauce to make it "stickier" and stop the cheese from sliding off, advice that appears to have been based on an 11-year-old joke posted on Reddit.

More seriously, when asked how many Muslim presidents the United States has had, the AI Overview replied, "The United States has had one Muslim president, Barack Hussein Obama" – repeating a falsehood about the former president's religion spread by some of his political opponents.

Google said: "The vast majority of AI Overviews provide high-quality information with links to dig deeper into the web. Many of the examples we saw were uncommon queries, and we also saw examples that were doctored or that we couldn't reproduce.

"We conducted extensive testing before launching this new experience and, as with other features we've introduced in Search, we welcome feedback. We're taking swift action where appropriate under our content policies and using these examples to develop broader improvements to our systems, some of which have already been rolled out."

The errors in responses generated by Google's AI are an inherent feature of the systems underlying the technology and are known as "hallucinations" or fabrications. The models underlying Google's Gemini and OpenAI's ChatGPT are predictive, meaning they work by selecting the most likely next words in a sequence based on the data they were trained on.

While the companies that build generative AI models – including OpenAI, Meta and Google – claim that the latest versions of their AI software have reduced the frequency of fabrications, these remain a significant problem for consumer and business applications.

For Google, whose search platform is trusted by billions of users for its links to original sources, "hallucinations" are particularly damaging. Its parent company, Alphabet, generates the vast majority of its revenue from search and the associated advertising business.

In recent months, Chief Executive Sundar Pichai has come under pressure both internally and externally to speed up the release of new, consumer-focused generative AI features, after being criticized for lagging behind rivals, particularly OpenAI, which has a $13 billion partnership with Microsoft.

At Google's annual developer conference this month, Pichai presented a new AI-centric strategy for the company. It rolled out AI Overviews – short, Gemini-generated responses to queries – at the top of many common search results for millions of U.S. users, under the slogan "Let Google do the Googling for you" and a promise to take "the legwork out of search."

Overviews' initial difficulties are similar to the backlash in February against the Gemini chatbot, whose image-generation tool created historically inaccurate depictions of various ethnicities and genders, such as women and people of color as Viking kings or German soldiers from World War II.

In response, Google apologized and suspended the generation of images of people through its Gemini model. The feature has not been reinstated.

Pichai has spoken of Google's dilemma: keeping up with the competition while acting ethically and remaining the search engine that many rely on for accurate and verifiable information.

Speaking at an event at Stanford University last month, he said: "People come to ask questions at important moments, like the dosage of medication for a three-month-old child. So we have to get it right. That trust is hard-earned and easy to lose."

"When we get it wrong, people tell us. Consumers have the highest standards… that's our guiding light and that's what we innovate around," Pichai added. "It helps us make the products better and get it right."
