
The reasoning capabilities of large language models are often overestimated

When it comes to artificial intelligence, appearances can be deceptive. The mystery surrounding how large language models (LLMs) work stems from their enormous size, complex training methods, hard-to-predict behavior, and elusive interpretability.

Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) recently took a closer look at how LLMs perform on variations of different tasks, uncovering intriguing insights into the interplay between memorization and reasoning skills. It turns out that their reasoning skills are often overestimated.

The study compared “default tasks,” the common tasks a model is trained and tested on, with “counterfactual scenarios,” hypothetical situations that deviate from the default conditions, which models like GPT-4 and Claude can typically be expected to handle. The researchers devised tests outside the models' comfort zones by tweaking existing tasks rather than creating entirely new ones. They used a variety of datasets and benchmarks specifically tailored to different aspects of the models' capabilities, such as arithmetic, chess, evaluating code, and answering logical questions.

Take arithmetic. When users interact with language models, arithmetic is typically done in base 10, the number base most familiar to the models. But observing that they do well in base 10 could give us the false impression that they have strong addition skills in general. Logically, if they really possess good addition skills, one would expect reliably high performance across all number bases, similar to calculators or computers. In fact, the research showed that these models are not as robust as many initially assume. Their high performance is limited to common task variants and suffers from a consistent, sharp drop in the unfamiliar counterfactual scenarios, indicating a lack of generalizable addition ability.
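To make the default-versus-counterfactual comparison concrete, here is a minimal sketch of how such an arithmetic probe could be set up. It illustrates the idea rather than reproducing the paper's actual evaluation code, and the `query_model` call mentioned at the end is a hypothetical placeholder for whatever interface returns a model's answer.

```python
# Minimal sketch of a counterfactual arithmetic probe (illustrative, not the paper's code).
# The same addition problems are posed in base 10 (the familiar "default") and in an
# unfamiliar base such as base 9; ground truth is computed programmatically, so any
# performance gap reflects the model rather than the grader.

import random

def to_base(n: int, base: int) -> str:
    """Render a non-negative integer in the given base (digits 0-9, bases 2-10)."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % base))
        n //= base
    return "".join(reversed(digits))

def make_addition_item(base: int, lo: int = 10, hi: int = 99) -> dict:
    """Build one prompt/answer pair; operands and answer are all written in `base`."""
    a, b = random.randint(lo, hi), random.randint(lo, hi)
    prompt = (
        f"You are doing addition in base-{base}. "
        f"What is {to_base(a, base)} + {to_base(b, base)}? Answer with digits only."
    )
    return {"prompt": prompt, "answer": to_base(a + b, base)}

def score(items: list[dict], model_answers: list[str]) -> float:
    """Fraction of exact-match answers."""
    correct = sum(item["answer"] == ans.strip() for item, ans in zip(items, model_answers))
    return correct / len(items)

if __name__ == "__main__":
    default_set = [make_addition_item(base=10) for _ in range(100)]
    counterfactual_set = [make_addition_item(base=9) for _ in range(100)]
    # `query_model` is a hypothetical stand-in for an API call that returns the model's
    # text completion; comparing score(default_set, ...) with score(counterfactual_set, ...)
    # exposes the generalization gap described above.
```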

The pattern held for many other tasks as well, such as fingering musical chords, spatial reasoning, and even chess problems where the starting positions of the pieces were slightly altered. While human players would still be expected to determine the legality of moves in the altered scenarios (given enough time), the models struggled and could not perform better than random guessing, meaning they have limited ability to generalize to unfamiliar situations. Much of their performance on the standard tasks is likely due not to general task ability, but to overfitting to, or directly memorizing, what they saw in their training data.
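As an illustration of the chess-style counterfactual, the sketch below shows how move legality can be checked programmatically in both a standard and an altered starting position, so a model's yes/no answers can be graded against ground truth. It assumes the `python-chess` library, and the swapped-piece position is an invented example, not necessarily the exact variant used in the paper.

```python
# Grading "is this move legal?" questions in a standard vs. an altered chess start,
# using the python-chess library. The swapped-piece position below is an invented
# illustration of a counterfactual start.

import chess

# Standard starting position.
STANDARD = chess.Board()

# Counterfactual start: knights and bishops swapped on the back ranks
# (castling rights omitted to keep the example simple).
SWAPPED_FEN = "rbnqknbr/pppppppp/8/8/8/8/PPPPPPPP/RBNQKNBR w - - 0 1"
COUNTERFACTUAL = chess.Board(SWAPPED_FEN)

def is_legal(board: chess.Board, uci_move: str) -> bool:
    """Ground-truth legality check for a move given in UCI notation (e.g. 'g1f3')."""
    try:
        move = chess.Move.from_uci(uci_move)
    except ValueError:
        return False  # malformed move string
    return board.is_legal(move)

if __name__ == "__main__":
    # In the standard start, g1f3 is a knight move and is legal;
    # in the swapped start, g1 holds a bishop, so the same move is not.
    print(is_legal(STANDARD, "g1f3"))        # True
    print(is_legal(COUNTERFACTUAL, "g1f3"))  # False
    # A model's answers to the same questions can then be scored against these
    # ground-truth labels in both settings.
```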

“We've uncovered a fascinating aspect of large language models: they excel in familiar scenarios, almost like a well-trodden path, but struggle when the terrain becomes unfamiliar. This insight is crucial as we strive to enhance these models' adaptability and broaden their application horizons,” says Zhaofeng Wu, an MIT PhD student in electrical engineering and computer science, CSAIL member, and lead author of a new paper on the research. “As AI becomes increasingly ubiquitous in our society, it must reliably handle diverse scenarios, whether familiar or not. We hope these findings will one day feed into the development of future LLMs with improved robustness.”

Despite the insights gained, there are of course limitations. The study's focus on specific tasks and settings did not capture the full range of challenges the models might encounter in real-world applications, signaling the need for more diverse test environments. Future work could involve expanding the range of tasks and counterfactual conditions to uncover more potential weaknesses; this could mean exploring more complex and less common scenarios. The team also wants to improve interpretability by developing methods to better understand the rationale behind the models' decision-making processes.

“As language models scale up, understanding their training data becomes increasingly challenging even for open models, let alone proprietary ones,” says Hao Peng, assistant professor at the University of Illinois at Urbana-Champaign. “The community remains puzzled about whether these models genuinely generalize to unseen tasks or seemingly succeed by memorizing the training data. This paper makes important strides in addressing that question. It constructs a suite of carefully designed counterfactual evaluations and provides new insights into the capabilities of state-of-the-art LLMs. It reveals that their ability to solve unseen tasks may be far more limited than many expect. It has the potential to inspire future research toward identifying the failure modes of today's models and developing better ones.”

Other authors include Najoung Kim, assistant professor at Boston University and visiting researcher at Google, and seven CSAIL members: MIT Electrical Engineering and Computer Science (EECS) doctoral students Linlu Qiu, Alexis Ross, Ekin Akyürek SM '21, and Boyuan Chen; former postdoc and Apple AI/ML researcher Bailin Wang; and EECS assistant professors Jacob Andreas and Yoon Kim.

The team's study was supported, in part, by the MIT–IBM Watson AI Lab, the MIT Quest for Intelligence, and the National Science Foundation. The team presented the work last month at the North American Chapter of the Association for Computational Linguistics (NAACL).
