Within four months of ChatGPT's launch on November 30, 2022, most Americans had heard of the AI chatbot. The hype around the technology – and the fear of it – was in full swing for much of 2023.
OpenAI's ChatGPT, Google's Bard, Anthropic's Claude, and Microsoft's Copilot are among the many chatbots built on large language models that enable eerily human conversations. The experience of interacting with one of these chatbots, combined with the Silicon Valley touch, can give the impression that these technological marvels are conscious entities.
But the truth is significantly less magical or glamorous. The Conversation published several articles in 2023 that dispel key misconceptions about this latest generation of AI chatbots: that they know about the world, can make decisions, are a substitute for search engines, and operate independently of humans.
1. Disembodied know-nothings
Chatbots based on large language models appear to know quite a bit. You can ask them questions, and more often than not they answer correctly. Despite the occasional comically wrong answer, the chatbots can interact with you much as humans do who share your experience of being a living, breathing person.
But these chatbots are sophisticated statistical machines that are extremely good at predicting the most likely sequence of words to respond with. Their “knowledge” of the world is actually human knowledge, reflected in the vast amount of human-generated text on which the chatbots’ underlying models are trained.
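To make that word-prediction idea concrete, here is a toy sketch in Python. The tiny probability table and two-word context are made-up illustrative assumptions, not how any real chatbot is built; the point is only the basic loop of picking a likely next word and repeating.

```python
# Toy next-word prediction: a made-up probability table stands in for the
# statistics a real model learns from vast amounts of human-written text.
import random

NEXT_WORD_PROBS = {
    ("the", "cat"): {"sat": 0.6, "slept": 0.3, "meowed": 0.1},
    ("cat", "sat"): {"on": 0.9, "quietly": 0.1},
    ("sat", "on"): {"the": 0.8, "a": 0.2},
    ("on", "the"): {"mat": 0.7, "sofa": 0.3},
}

def generate(prompt: str, steps: int = 4) -> str:
    words = prompt.lower().split()
    for _ in range(steps):
        context = tuple(words[-2:])             # condition on the last two words
        options = NEXT_WORD_PROBS.get(context)
        if not options:                         # no statistics for this context
            break
        next_word = random.choices(list(options), weights=list(options.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("The cat"))   # e.g. "the cat sat on the mat"
```

A real large language model does the same kind of thing at vastly greater scale: it works over tokens rather than whole words, and the probabilities come from a neural network trained on human text rather than a lookup table.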
Arizona State University psychology researcher Arthur Glenberg and University of California, San Diego cognitive scientist Cameron Robert Jones explain that people's knowledge of the world depends as much on their bodies as on their brains. “People's understanding of a term like 'paper sandwich wrapping' includes, for instance, the look of the packaging, its feel, its weight and, consequently, how we can use it: to wrap a sandwich,” they explained.
This knowledge means that people intuitively know other uses for a sandwich wrapper, such as an improvised way to protect one's head from the rain. Not so with AI chatbots. “People understand how to use things in ways that are not captured in language use statistics,” they wrote.
2. Poor judgment
ChatGPT and its cousins can also give the impression of having cognitive abilities – such as understanding the concept of negation or making rational decisions – thanks to all of the human language they have ingested. This impression has led cognitive scientists to test these AI chatbots to evaluate how they compare with humans in various ways.
University of Southern California AI researcher Mayank Kejriwal tested large language models' understanding of expected return, a measure of how well someone understands the stakes in a betting scenario. He found that the models bet randomly.
“This is the case even when we ask a trick question like: If you flip a coin and it comes up heads, you win a diamond; if it comes up tails, you lose a car. Which would you take? The correct answer is heads, but the AI models chose tails about half the time,” he wrote.
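To see what “expected return” means in practice, here is a minimal sketch of the probability-weighted comparison a rational bettor would make. The gambles and dollar values are illustrative assumptions, not figures from the study.

```python
# Expected return of a gamble: the probability-weighted average of its payoffs.
# The dollar values below are illustrative assumptions, not data from the study.

def expected_return(outcomes: list[tuple[float, float]]) -> float:
    """outcomes: (probability, payoff) pairs describing one gamble."""
    return sum(p * payoff for p, payoff in outcomes)

# Gamble A: a coin flip where heads wins a diamond (+$5,000)
# and tails loses a car (-$30,000).
gamble_a = [(0.5, 5_000), (0.5, -30_000)]

# Gamble B: decline to play and keep $0 for certain.
gamble_b = [(1.0, 0)]

for name, outcomes in [("A: diamond-or-car coin flip", gamble_a),
                       ("B: don't play", gamble_b)]:
    print(f"{name}: expected return = ${expected_return(outcomes):+,.0f}")

# A rational bettor takes the option with the higher expected return
# (here, not playing at all). The research found that large language
# models often fail to bet this way.
```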
3. Summaries, not results
While it may not be surprising that AI chatbots aren’t as human as they seem, they aren’t necessarily digital superstars either. For example, ChatGPT and the like are increasingly being used instead of search engines to answer queries. The results are mixed.
University of Washington information scientist Chirag Shah explains that large language models work well as information aggregators: they combine key information from multiple search engine results into a single block of text. But that’s a double-edged sword. It is useful for getting the gist of a topic – assuming there are no “hallucinations” involved – but it leaves the searcher in the dark about the sources of the information and deprives them of the chance of stumbling upon unexpected information.
“The problem is that even if these systems are wrong only 10% of the time, you don’t know which 10%,” Shah wrote. “This is because these systems lack transparency – they don’t reveal what data they’re based on, what sources they used to produce the answers, or how those answers are generated.”
4. Not 100% artificial
Perhaps the most damaging misconception about AI chatbots is that they are highly automated because they are built on artificial intelligence technology. While you may know that large language models are trained on human-generated text, you may not know that thousands of workers – and millions of users – continually refine the models and teach them to weed out harmful responses and other unwanted behavior.
Georgia Tech sociologist John P. Nelson pulled back the curtain on big tech companies to reveal that they use labor, typically from the Global South, and feedback from users to teach the models which responses are good and which are bad.
“There are many, many human workers behind the screen, and they are always needed if the model is to be further improved or its content coverage expanded,” he wrote.