
Google's use of AI to support search shows its problematic approach to organizing information

From government documents to news, commerce, music and social interactions, much of the world's information is now online. And Google, founded in 1998 with the mission to "organize the world's information and make it universally accessible and useful," is the primary way we access this flood of data and culture.

As of April 2024, Google's search engine accounted for roughly 90 percent of the Canadian search market. For academics, its specialized Google Scholar and Google Books services are pillars of research life.

Although Google Search is vital infrastructure, Google itself is recklessly sabotaging it in socially harmful ways that demand strong regulatory action.

Reinventing search

On May 14, Google announced it was revamping its core search product to make generative AI content central, with the goal of "reinventing search." One of the first rollouts, AI Overviews, is a chatbot that uses a large language model (LLM) to supply authoritative-sounding answers to queries instead of requiring users to click through to another website.

Google's business model relies on advertising revenue, which shapes search results.
(Shutterstock)

OpenAI's launch of ChatGPT in November 2022 sparked the hype around generative AI. But by now, most users should know that LLM-powered chatbots are unreliable sources of information. This is because they are merely high-performance pattern-recognition machines. The output they generate in response to a prompt is produced probabilistically: each word or part of an image is chosen based on the probability that it appears alongside similar words or images in the model's training data.
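The statistical principle at work can be illustrated with a toy sketch. The code below is not Google's model or any real LLM; it is a minimal bigram generator over an invented mini-corpus (the corpus text, the `generate` function and its parameters are all hypothetical), showing how each next word is drawn purely in proportion to how often it followed the previous word:

```python
from collections import Counter, defaultdict
import random

# A tiny invented corpus; real models train on trillions of words.
corpus = ("the cheese slips off the pizza so add glue to "
          "the sauce so the cheese sticks to the pizza").split()

# Bigram table: how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(n_words, seed=0):
    """Emit each next word in proportion to how often it followed
    the previous word in the corpus: statistical likelihood, not truth."""
    rng = random.Random(seed)
    text = ["the"]
    for _ in range(n_words):
        options = follows[text[-1]]
        choices = list(options)
        weights = [options[w] for w in choices]
        text.append(rng.choices(choices, weights=weights, k=1)[0])
    return text

print(" ".join(generate(8)))
```

Whatever sentence this produces is "plausible" only in the sense that its word pairs occurred in the corpus; nothing in the mechanism checks whether the claim is true, which is the article's point about AI Overviews.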

To be clear, LLMs are not a form of intelligence, artificial or otherwise. They cannot "think." For an LLM, the only truth is the truth of correlations among the contents of its training data.

It was therefore both very funny and completely predictable when AI Overviews users began to report that Google advised them, among other things, to add "about 1/8 cup of non-toxic glue" to pizza sauce so the cheese doesn't slip off, that geologists recommend people eat a small rock every day, and that there are no African countries whose names begin with the letter K.

These weren't "errors" in the sense of deliberately reporting false information. AI Overviews did exactly what LLMs always do: it reported statistically likely sequences of text or images based on what was in its training data. LLMs don't, and can't, evaluate truth claims.

After this flood of ridicule and mockery, Google finally acknowledged the criticism. Although it says it is working to improve AI Overviews, the nature of LLMs as statistical machines likely means, as critics have argued, that "AI Overviews will always be broken."

As amusing as these stories are, and despite Google's response, they also raise troubling questions about our dependence on a single company for a service we once entrusted to public libraries: organizing and making the world's information accessible.

Drastic effects

Google Search has two fundamental flaws that are becoming increasingly difficult to ignore as their effects become more drastic.

First, Google's reliance on advertising revenue has led it to limit its search functionality in order to deliver paid advertising to users. Observers have long noted that Google's prioritization of paid advertising in search has made the product worse for users, because it puts the interests of advertisers and of Google itself first.

This focus on advertising also has a domino effect on the entire (advertising-based) knowledge ecosystem, because it puts Google in direct competition for advertising dollars with the media companies that rely on Google Search to be found by potential readers.

This conflict was a central justification for Canada's controversial Online News Act, which requires companies like Google and Meta to negotiate payments to Canadian news media. The conflict will only worsen: products like AI Overviews are clearly designed to keep users on Google rather than have them click through to the underlying websites.

What is less well known is that Google's approach to knowledge itself drives this reckless disregard for accuracy and truth in its search results. Google and much of Silicon Valley are committed to an ideology that Dutch media scholar José van Dijck calls "dataism": the belief that data speaks for itself and can be interpreted without any external context.

As my co-author Natasha Tusikov and I explore in our book, for the dataist, correlations are synonymous with truth. This is an anti-scientific worldview that ignores basic scientific methodological standards of validity (how do we know something is true?) and reliability (can we reproduce the results?).

Each word or part of an image is chosen by an AI search based on the probability that it appears in a similar image or phrase in the training data.
(Shutterstock)

The idea that correlations correspond to truth is at the heart of Google's search algorithm. Simply put, search results are not objective: Google Search ranks (non-paid) results according to their popularity, which is determined by which pages link to them and how many. Note that this popularity contest is very different from the expert judgment librarians use when selecting books for a library and categorizing them in a card catalogue.
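The link-popularity principle can be sketched in a few lines. This is a simplified illustration in the spirit of the original PageRank idea, not Google's actual (proprietary and far more complex) ranking system; the page names and the `rank_by_links` function are invented for the example:

```python
def rank_by_links(links, damping=0.85, iterations=50):
    """Toy link-popularity ranking. `links` maps each page to the
    pages it links to. A page scores highly when many pages link to
    it and when those linkers are themselves popular -- a measure of
    popularity, not of accuracy or expert judgment."""
    pages = list(links)
    n = len(pages)
    scores = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if not outs:
                # A page with no outgoing links spreads its score evenly.
                for q in pages:
                    new[q] += damping * scores[p] / n
            else:
                for q in outs:
                    new[q] += damping * scores[p] / len(outs)
        scores = new
    return sorted(pages, key=scores.get, reverse=True)

# Hypothetical mini-web: many pages link to "popular.com",
# regardless of whether its content is accurate.
web = {
    "popular.com": ["blog1.com"],
    "blog1.com": ["popular.com"],
    "blog2.com": ["popular.com"],
    "expert.org": ["popular.com"],
}
print(rank_by_links(web))
```

In this mini-web, "popular.com" ranks first simply because three pages point at it; "expert.org" ranks near the bottom despite its name, which is precisely the gap between popularity and expertise the article describes.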

Access to knowledge

The societal damage caused by dependence on a corrupted means of organizing knowledge can hardly be overstated. Access to sound knowledge is essential to every part of society. Google's advertising dependence and dataist ideology have brought the company to the point where it is actively sabotaging our knowledge ecosystem.

This sabotage demands a strong regulatory response. To be clear: Google Search should be run by people with the ethics of librarians, not tech bros.

To achieve this goal, governments must set minimum standards for search to ensure that it delivers sufficiently high-quality results. These standards should include banning links between advertising and search results, as well as the use of search data for personalized advertising.

In addition, search engines and all global platforms must be subject to democratic control at home, while remaining interoperable across borders in coordination with other like-minded democratic countries.

None of these steps will be easy. But unless we are OK with letting the world's information be organized by a ruthless, profit-driven corporation that has no problem shipping a product that tells people it's healthy to eat rocks, we really have no choice but to bring Google to heel.
