
Using ideas from game theory to enhance the reliability of language models

Imagine you and a friend are playing a game where your goal is to communicate secret messages to each other using only cryptic sentences. Your friend's job is to guess the secret message behind your sentences. Sometimes you give clues directly, and other times your friend has to work out the message by asking yes-or-no questions about the clues you've given. The challenge is that both of you need to be sure you understand each other correctly and agree on the secret message.

Researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a similar “game” to improve the way AI understands and generates text. It is known as a “consensus game” and involves two parts of an AI system: one part tries to generate sentences (like giving clues), and the other part tries to understand and evaluate those sentences (like guessing the secret message).

The researchers found that by treating this interaction as a game, in which both parts of the AI work together under specific rules to agree on the right message, they could significantly improve the AI's ability to give correct and coherent answers to questions. They tested this new game-like approach on a variety of tasks, such as reading comprehension, solving math problems, and carrying on conversations, and found that it helped the AI perform better across the board.

Traditionally, large language models answer in one of two ways: they generate answers directly from the model (generative querying) or use the model to score a set of predefined answers (discriminative querying), which can produce differing and sometimes incompatible results. With the generative approach, “Who is the president of the United States?” might yield a straightforward answer like “Joe Biden.” However, a discriminative query might incorrectly dispute this fact, for instance by endorsing a wrong answer such as “Barack Obama.”
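To make the contrast concrete, here is a minimal sketch of the two query styles, assuming a hypothetical `lm_logprob` function that stands in for any language model API returning the log-probability of a piece of text (none of this is the researchers' code):

```python
# Minimal sketch, not the researchers' code. `lm_logprob` is a hypothetical
# stand-in for any LM API that scores text with a log-probability.

def generative_query(lm_logprob, question, candidates):
    """Generative querying: pick the answer the model is most likely to produce."""
    return max(candidates, key=lambda a: lm_logprob(f"Q: {question}\nA: {a}"))

def discriminative_query(lm_logprob, question, answer):
    """Discriminative querying: ask the model to judge a proposed answer."""
    prompt = f"Q: {question}\nProposed answer: {answer}\nIs this correct?"
    return lm_logprob(prompt + " Yes") > lm_logprob(prompt + " No")

# The incompatibility described above: the answer generative_query prefers
# can be exactly the one discriminative_query rejects, and vice versa.
```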

So how can we reconcile mutually incompatible assessment methods to provide coherent, efficient predictions?

“Imagine a new way to help language models understand and generate text, like a game. We've developed a training-free, game-theoretic method that treats the whole process as a complex game of clues and signals, in which a generator tries to send the right message to a discriminator using natural language: not chess pieces, but words and sentences,” says Athul Jacob, an MIT doctoral student in electrical engineering and computer science and CSAIL affiliate. “Our way of navigating this game is to find the ‘approximate equilibria,’ which leads to a new decoding algorithm called ‘equilibrium ranking.’ It's a pretty exciting demonstration of how bringing game-theoretic strategies into the mix can tackle some big challenges in making language models more reliable and consistent.”

When tested on many tasks, such as reading comprehension, reasoning, math problem-solving, and dialogue, the team's algorithm consistently improved the performance of these models. Using the ER algorithm with the LLaMA-7B model even outperformed the results of much larger models. “Given that these models are already competitive and that people have been working on them for a while, the level of improvement we saw, being able to outperform a model ten times the size, was a pleasant surprise,” says Jacob.

Game on

Diplomacy, a strategic board game set in pre-World War I Europe, in which players negotiate alliances, betray friends, and conquer territory without using dice, relying solely on skill, strategy, and interpersonal manipulation, recently made a comeback. In November 2022, computer scientists including Jacob developed “Cicero,” an AI agent that achieves human-level performance in the mixed-motive, seven-player game, which requires the same skills mentioned above, but with natural language. The math behind Cicero partially inspired the consensus game.

Although the history of AI agents long predates the arrival of OpenAI's chat software in November 2022, it is well documented that they can still play along as your well-meaning but pathological friend.

The consensus game system reaches equilibrium as an agreement, ensuring accuracy and fidelity to the model's original insights. To achieve this, the method iteratively adjusts the interactions between the generative and discriminative components until they reach consensus on an answer that accurately reflects reality and is consistent with their initial beliefs. This approach effectively bridges the gap between the two querying methods.
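The paper's actual procedure solves for equilibria of a signaling game with regularized no-regret updates; the sketch below is only a simplified illustration of that iterative back-and-forth, with a hypothetical `anchor` weight standing in for the regularization that keeps each side faithful to its initial beliefs:

```python
import numpy as np

def normalize(p):
    return p / p.sum()

def consensus_iterate(gen_init, disc_init, anchor=0.5, steps=50):
    """gen_init[i]: generator's initial probability of producing answer i.
    disc_init[i]: discriminator's initial probability that answer i is correct.
    Returns the two distributions after being pulled toward agreement."""
    gen, disc = gen_init.copy(), disc_init.copy()
    for _ in range(steps):
        # Each side moves toward the other's current view of the answers,
        # while `anchor` keeps it faithful to its own initial beliefs
        # (the fidelity-to-original-insights property described above).
        gen = normalize(gen_init ** anchor * disc ** (1 - anchor))
        disc = normalize(disc_init ** anchor * gen ** (1 - anchor))
    return gen, disc

def equilibrium_rank(candidates, gen, disc):
    """Rank answers by how strongly both refined policies endorse them."""
    order = np.argsort(-(gen * disc))
    return [candidates[i] for i in order]
```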

In practice, implementing the consensus game approach to language model querying, especially for question-answering tasks, involves significant computational challenges. For example, when using datasets like MMLU, which contain thousands of questions with multiple-choice answers, the model must apply the mechanism to every query. Then, for every question and its possible answers, a consensus must be reached between the generative and discriminative components.
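Schematically, and reusing the sketch above, that per-question cost looks like the loop below; `score_with_lm` is a hypothetical helper that turns a question and its options into the two initial distributions:

```python
# Hypothetical harness showing why MMLU-scale evaluation is expensive:
# the consensus computation must be rerun for every single question.

def run_benchmark(dataset, score_with_lm):
    """dataset: iterable of (question, options) pairs, MMLU-style."""
    predictions = []
    for question, options in dataset:  # thousands of questions on MMLU
        gen_init, disc_init = score_with_lm(question, options)
        gen, disc = consensus_iterate(gen_init, disc_init)
        predictions.append(equilibrium_rank(options, gen, disc)[0])
    return predictions
```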

The system did struggle with one grade-school rite of passage: math word problems. It could not generate incorrect answers, which is a critical component of understanding the process of finding the right one.

“The past few years have seen truly impressive advances in both strategic decision-making and language generation from AI systems, but we are only starting to figure out how to bring the two together. Equilibrium ranking is a first step in this direction, but I think there is still a lot we can do to scale it up to more complex problems,” says Jacob.

One avenue for future work is to enhance the base model by integrating the outputs of the current method. This is particularly promising, since it could yield more factual and consistent answers across different tasks, including factuality and open-ended generation. The potential for such a method to significantly improve the base model's performance is high, which could lead to more reliable and factual outputs from ChatGPT and the similar language models that people use every day.

“Although modern language models such as ChatGPT and Gemini have made it possible to solve various tasks through chat interfaces, the statistical decoding process that generates a response from such models has remained unchanged for decades,” says Google research scientist Ahmad Beirami, who was not involved in the work. “The MIT researchers' proposal is an innovative game-theoretic framework for decoding from language models by solving the equilibrium of a consensus game. The significant performance gains reported in the research are promising, opening the door to a potential paradigm shift in language model decoding that could fuel a flurry of new applications.”

Jacob co-authored the paper with MIT-IBM Watson AI Lab researcher Yikang Shen and MIT Department of Electrical Engineering and Computer Science assistant professors Gabriele Farina and Jacob Andreas, who is also a CSAIL member. They presented their work at the International Conference on Learning Representations (ICLR) earlier this month, where it was highlighted as a “spotlight paper.” The research also received a “best paper award” at the NeurIPS R0-FoMo workshop in December 2023.

