It's not only AI that hallucinates

It may be premature to extrapolate from a sample size of one (me). But I admit that my memory is far from perfect: I forget some things, confuse others, and sometimes "remember" events that never happened. I suspect some FT readers may be similarly afflicted. An intelligent machine might call this human hallucination.

We talk a lot about generative AI models hallucinating facts. We wince at the lawyer who filed a court document citing fictitious cases invented by ChatGPT. An FT colleague who asked a chatbot for a chart of the cost of training generative AI models was surprised to find that the most expensive model it identified did not exist (unless the chatbot has access to inside information). As every user quickly discovers, these models are unreliable, just like people. The interesting question is whether machines are more correctable than we are. It may prove easier to rewrite code than to rewire the brain.

One of the best-known examples of the fallibility of human memory is the testimony of John Dean, White House counsel under Richard Nixon. During the 1973 Watergate hearings, Dean was dubbed "the human tape recorder" because of his remarkable memory. But, unbeknownst to Dean, Nixon had installed an actual tape recorder in the Oval Office. Researchers were therefore able to compare Dean's account of critical conversations with the verbatim transcripts.

In a 1981 paper analyzing Dean's testimony, the psychologist Ulric Neisser identified several glaring errors and reinterpretations of conversations in the lawyer's account, as well as the difficulty of defining truth and accuracy. In that work, Neisser distinguished between semantic and episodic memory. Dean was broadly right in recalling the general substance of his conversations with Nixon, and the nature of the Watergate cover-up, even when he was completely wrong about the details of particular episodes.

One could argue that large language models do the opposite: given all the data they ingest, they should have good episodic memory (although garbage input will still produce garbage output). But they have poor semantic memory. Although an LLM would most likely have summarized the Oval Office tapes more faithfully than Dean recalled the conversations months later, it would have no contextual understanding of the meaning of that content.

Researchers are working on ways to improve the episodic memory of generative AI models and to reduce hallucinations. A recent paper from Google DeepMind researchers proposed a new method called SAFE (Search-Augmented Factuality Evaluator), in which model-generated answers are broken down into individual facts and checked against Google Search results for factual accuracy. The paper claims that this experimental system outperforms human fact-checking annotators on accuracy and is more than 20 times cheaper.
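To make that break-then-check pipeline concrete, here is a minimal sketch in Python based only on the description above; it is not DeepMind's code. The helper names `split_into_claims`, `search_snippets` and `is_supported` are hypothetical stand-ins: the real SAFE system uses an LLM to decompose answers into atomic facts and to judge each fact against live Google Search results, whereas this sketch uses naive sentence-splitting and keyword overlap.

```python
def split_into_claims(answer: str) -> list[str]:
    """Break a model-generated answer into individual claims.
    Stand-in: treat each sentence as one claim; SAFE uses an LLM
    to split answers into self-contained atomic facts."""
    return [s.strip() for s in answer.split(".") if s.strip()]

def search_snippets(claim: str) -> list[str]:
    """Stand-in for a Google Search call: return snippets that share
    at least one word with the claim, drawn from a canned corpus."""
    corpus = [
        "The English Channel cannot be crossed on foot; swimmers hold the records.",
        "John Dean testified during the 1973 Watergate hearings.",
    ]
    words = set(claim.lower().split())
    return [s for s in corpus if words & set(s.lower().split())]

def is_supported(claim: str, snippets: list[str]) -> bool:
    """Crude support test: three or more overlapping words with any snippet.
    SAFE instead asks an LLM whether the search results support the claim."""
    claim_words = set(claim.lower().split())
    return any(len(claim_words & set(s.lower().split())) >= 3 for s in snippets)

def factuality_report(answer: str) -> dict[str, bool]:
    """Rate each individual claim in an answer as supported or not."""
    return {c: is_supported(c, search_snippets(c)) for c in split_into_claims(answer)}

# Usage example. Note that the naive keyword heuristic wrongly passes the
# false Channel-crossing claim, which is exactly why SAFE relies on an LLM
# judge rather than word overlap.
print(factuality_report(
    "John Dean testified during the 1973 Watergate hearings. "
    "Christof Wandratsch crossed the English Channel on foot."
))
```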

"In the next few years we will be able to verify the output of large language models with good accuracy. I think that's pretty useful," one of the paper's authors, Quoc Le, tells me. Hallucinations are both a welcome feature of LLMs when it comes to creativity and a flaw to be suppressed when it comes to factuality, he says.

Meanwhile, LLMs can still confuse creativity and factuality. For example, when I asked Microsoft Bing's Copilot for the world record for crossing the English Channel on foot, it confidently replied: "The world record for crossing the English Channel entirely on foot is held by Christof Wandratsch of Germany, who completed the crossing in 14 hours and 51 minutes on August 14, 2020." Helpfully, it even cited a source for this fact. Unfortunately, the source turned out to be an article published last year highlighting the hallucinations generated by ChatGPT.

We should focus not only on how content is created but also on how it lands, says Maria Schnell, chief language officer at RWS, which provides technology-enabled text and translation services to more than 8,000 customers in 548 language combinations. In a world where content is becoming cheaper and more ubiquitous, tailoring information to a particular audience in a format, language and cultural context they understand becomes far more important, and that requires a human touch.

"Accuracy is relatively easy to automate. Relevance is not," says Schnell. "We need to think about how content is received, and that is where AI struggles."

At least for now, humans and machines can work together fruitfully to enhance their different capabilities and minimize their respective deficiencies.
