To build a better AI assistant, start by modeling people's irrational behavior

To build AI systems that can work effectively with humans, it helps to start with a good model of human behavior. But people tend to behave suboptimally when making decisions.

This irrationality, which is especially difficult to model, often comes down to computational limitations: a person cannot spend decades thinking about the ideal solution to a single problem.

Researchers at MIT and the University of Washington have developed a way to model the behavior of an agent, whether human or machine, that accounts for the unknown computational constraints that may hamper the agent's problem-solving abilities.

Their model can automatically infer an agent's computational limitations from just a few traces of its previous actions. The result, called the agent's “inference budget,” can then be used to predict that agent's future behavior.

In a new paper, the researchers demonstrate how their method can be used to infer a person's navigation goals from previous routes and to predict players' subsequent moves in chess games. Their technique matches or exceeds another popular method for modeling this type of decision-making.

Ultimately, this work could help scientists teach AI systems how humans behave, which could allow these systems to respond better to their human collaborators. The ability to understand a human's behavior, and then to infer their goals from that behavior, could make an AI assistant much more useful, says Athul Paul Jacob, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this technique.

“If we know that a human is about to make a mistake, having seen how they have behaved before, the AI agent could step in and offer a better way to do it. Or the agent could adapt to the weaknesses of its human collaborators. Being able to model human behavior is an important step toward building an AI agent that can actually help that human,” he says.

Jacob co-authored the paper with Abhishek Gupta, assistant professor at the University of Washington, and senior author Jacob Andreas, associate professor of EECS and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). The research will be presented at the International Conference on Learning Representations.

Model behavior

Researchers have been building computational models of human behavior for decades. Many previous approaches try to account for suboptimal decision-making by adding noise to the model. Instead of the agent always choosing the correct option, the model might have the agent make the correct choice 95 percent of the time.
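As a purely illustrative sketch of that earlier noise-based approach (not the new method described below), such a model simply perturbs an otherwise optimal policy. The function name and setup here are assumptions for the example; only the 95 percent figure comes from the description above.

```python
import random

def noisy_choice(options, best_option, accuracy=0.95):
    """Classic noise model: return the optimal option with probability
    `accuracy`, otherwise pick uniformly among the other options."""
    if random.random() < accuracy or len(options) == 1:
        return best_option
    return random.choice([o for o in options if o != best_option])

# Example: an agent picking among three chess moves where "Nf3" is assumed best.
print(noisy_choice(["Nf3", "e4", "d4"], best_option="Nf3"))
```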

However, these methods can fail to capture the fact that people do not always behave suboptimally in the same ways.

Others at MIT have also studied more effective ways to plan and infer goals in the face of suboptimal decision-making.

To build their model, Jacob and his collaborators drew inspiration from previous studies of chess players. They noticed that players took less time to think before acting when making simple moves, and that in more difficult matches, stronger players tended to spend more time planning than weaker ones.

“Ultimately, we saw that the depth of planning, or how long someone thinks about the problem, is a really good indicator of how people behave,” Jacob says.

They built a framework that can infer an agent's planning depth from its previous actions and use that information to model the agent's decision-making process.

The first step in their method is to run an algorithm for a set amount of time to solve the problem being studied. For instance, if they are studying a chess match, they might let a chess-playing algorithm run for a certain number of steps. At the end, the researchers can see the decisions the algorithm made at each step.

Their model compares these decisions to the behavior of an agent solving the same problem. It lines up the agent's decisions with the algorithm's and identifies the step at which the agent stopped planning.

From this, the model can determine the agent's inference budget, or how long that agent will plan for this problem. Using the inference budget, it can predict how the agent would react when solving a similar problem.
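In spirit, that procedure can be sketched in a few lines of Python. Everything here is illustrative rather than the authors' actual implementation: `best_action_at_depth` is a toy stand-in for an anytime solver such as a depth-limited chess engine, and the budget is chosen by simple vote counting, whereas the paper fits it statistically.

```python
from collections import Counter

def best_action_at_depth(state, depth):
    """Toy stand-in for an anytime solver: with more planning steps,
    the recommended action gets closer to the optimum stored in the state."""
    return min(state["optimum"], depth)

def infer_budget(observations, max_depth):
    """Estimate an agent's inference budget from (state, action) traces:
    the planning depth whose recommendations best match the agent."""
    matches = Counter()
    for state, action in observations:
        for depth in range(1, max_depth + 1):
            if best_action_at_depth(state, depth) == action:
                matches[depth] += 1
    # Choose the depth that explains the most observed actions.
    return max(range(1, max_depth + 1), key=lambda d: matches[d])

def predict_action(state, budget):
    """Predict the agent's next move by planning only up to its budget."""
    return best_action_at_depth(state, budget)

# Traces from an agent that always stops planning after 4 steps:
traces = [({"optimum": 10}, 4), ({"optimum": 3}, 3), ({"optimum": 7}, 4)]
budget = infer_budget(traces, max_depth=12)    # -> 4
print(predict_action({"optimum": 9}, budget))  # -> 4
```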

An interpretable solution

This method is very efficient because the researchers can access the full set of decisions made by the problem-solving algorithm without any additional work. The framework can be applied to any problem that can be solved with a particular class of algorithms.

“For me, the most striking thing was the fact that this inference budget is very interpretable. It says that harder problems require more planning, or that being a strong player means planning for longer. When we first set out to do this, we didn’t think our algorithm would be able to pick up on those behaviors naturally,” Jacob says.

The researchers tested their approach on three different modeling tasks: inferring navigation goals from previous routes, guessing a person's communicative intent from their verbal cues, and predicting subsequent moves in human-human chess matches.

Their method matched or outperformed a popular alternative in every experiment. In addition, the researchers found that their model of human behavior aligned well with measures of player skill (in the chess matches) and task difficulty.

Moving forward, the researchers want to use this approach to model the planning process in other domains, such as reinforcement learning (a trial-and-error method commonly used in robotics). In the long run, they intend to keep building on this work toward the larger goal of developing more effective AI collaborators.

This work was supported, in part, by the MIT Schwarzman College of Computing Artificial Intelligence for Augmentation and Productivity program and the National Science Foundation.
