Since ChatGPT was released in late 2022, hundreds of thousands of people have begun using large language models to access knowledge. Their appeal is easy to understand: ask a question, get a sophisticated summary, and move on. It feels like effortless learning.
However, a new paper I co-authored provides experimental evidence that this ease may come at a cost: when people rely on large language models to summarize information on a topic for them, they tend to develop shallower knowledge of it than when they learn through a standard Google search.
My co-author Jin Ho Yun and I, both professors of marketing, reported this finding in a paper based on seven studies with more than 10,000 participants. Most studies used the same basic paradigm: participants were asked to learn about a topic, such as how to start a vegetable garden, and were randomly assigned to use either an LLM like ChatGPT or the “old-fashioned method” of navigating links through a standard Google search.
There were no restrictions on how participants used the tools: they could search Google for as long as they wanted and keep prompting ChatGPT if they felt they needed more information. After completing their research, they were asked to write advice for a friend on the topic based on what they had learned.
The data showed a consistent pattern: people who learned about a topic through an LLM rather than through web search felt they had learned less, put less effort into writing their subsequent advice, and ultimately wrote advice that was shorter, less factual, and more generic. In turn, when this advice was shown to an independent sample of readers who did not know which tool had been used, they found the advice less informative and less helpful, and were less willing to adopt it.
We found these differences to be robust across contexts. For example, one possible explanation for why LLM users wrote shorter and more generic advice is simply that LLM results offered less diverse information than Google results. To rule out this possibility, we ran an experiment in which participants were presented with identical facts in the results of their Google and ChatGPT searches. Likewise, in another experiment we held the search platform (Google) constant and varied whether participants learned from standard Google results or from Google's AI Overviews feature.
The results confirmed that learning from synthesized LLM answers, even with the facts and platform held constant, produced more superficial knowledge than gathering, interpreting, and synthesizing information for oneself through standard web links.
Why it matters
Why did using LLMs appear to interfere with learning? One of the most fundamental principles of skill development is that people learn best when they actively engage with the material they are trying to learn.
When we learn about a topic through a Google search, we encounter much more “friction”: we have to navigate various web links, read information sources, and interpret and summarize them ourselves.
Although this friction is more demanding, it leads to the development of a deeper, more original mental representation of the topic. With LLMs, however, this whole process is carried out on the user's behalf, transforming learning from an active process into a passive one.
What's next?
To be clear, we don’t believe the solution to these problems is to abandon LLMs, especially given the undeniable benefits they offer in many contexts. Rather, our message is that people need to become smarter, more strategic users of LLMs, which starts with understanding where LLMs help or hinder their goals.
Do you need a quick, factual answer to a question? Feel free to use your favorite AI copilot. But if your goal is to develop deep, generalizable knowledge of a field, relying on LLM syntheses alone will be less helpful.
As part of my research on the psychology of new technologies and new media, I’m also interested in whether it is possible to make LLM learning a more active process. In another experiment, we tested this by having participants engage with a special GPT model that offered real-time web links alongside its synthesized answers. Even then, we found that once participants received an LLM summary, they weren’t motivated to dig deeper into the original sources. As a result, they still developed lower levels of proficiency than those who used standard Google search.
Building on this, my future research will explore generative AI tools that impose healthy frictions on learning tasks, specifically examining which kinds of guardrails or speed bumps are most effective at motivating users to learn actively beyond simple, synthesized answers. Such tools seem particularly important in secondary education, where a major challenge for educators is how best to help students develop basic reading, writing, and arithmetic skills while preparing them for a real world in which LLMs are likely to be an integral part of their daily lives.