
DeepMind trains robot soccer players to kick, attack and defend

Researchers at Google's DeepMind have achieved a milestone in robotics by successfully training 20-inch-tall humanoid robots to play one-on-one soccer games.

Their study, published in Science Robotics, details how they used deep reinforcement learning (RL) to teach the robots complex locomotion and gameplay skills.

The commercially available Robotis OP3 robots learned to run, kick, block, stand up from falls and score goals – all without manual programming.

Instead, AI agents controlling the robots acquired these skills through trial and error in simulated environments, guided by a reward system.
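As a rough illustration of what such a reward system might look like, here is a minimal sketch of a shaped reward function. The specific terms and weights below are hypothetical, chosen for illustration; they are not DeepMind's actual reward function.

```python
# Hypothetical shaped reward for a simulated soccer agent.
# Terms and weights are illustrative only, not taken from the paper.

def soccer_reward(scored_goal: bool, ball_speed_toward_goal: float,
                  robot_upright: bool, fell_over: bool) -> float:
    """Combine sparse and shaped terms into one scalar reward."""
    reward = 0.0
    if scored_goal:
        reward += 100.0                          # large sparse reward for scoring
    reward += 0.1 * ball_speed_toward_goal       # shaped term: push ball toward goal
    if robot_upright:
        reward += 0.05                           # small bonus for staying upright
    if fell_over:
        reward -= 1.0                            # penalty for falling
    return reward
```

The agent never sees these rules directly; it simply tries actions in simulation and gradually favors those that earn higher cumulative reward.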

This is how the robot soccer system works:

  1. First, they trained separate neural networks, called “skill policies,” for basic movements like walking, kicking, and standing up. Each skill was learned in a focused environment that rewarded the robot for mastering that specific skill.
  2. The individual skill policies were then merged into a single master policy network using a method called policy distillation. This unified policy could activate the appropriate skill depending on the situation.
  3. The researchers then further optimized the master policy through self-play, during which the robot played simulated games against previous versions of itself. This iterative process led to continuous improvements in strategy and gameplay.
  4. To prepare the policy for real-world use, the simulated training environment was randomized over aspects such as friction and robot mass distribution. This helped make the policy more robust to physical variation.
  5. Finally, after training entirely in simulation, the finished policy was uploaded to real OP3 robots, which then played physical soccer games without further fine-tuning.
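The domain-randomization idea in step 4 can be sketched as drawing fresh physics parameters for each training episode, so the policy never overfits to one exact simulator configuration. The parameter names and ranges below are assumptions for illustration, not the values used in the paper.

```python
import random

# Illustrative domain randomization: sample per-episode physics
# parameters from ranges. Names and ranges are hypothetical, not
# taken from DeepMind's paper.

def randomize_physics(rng: random.Random) -> dict:
    return {
        "floor_friction": rng.uniform(0.4, 1.0),      # vary ground friction
        "mass_scale": rng.uniform(0.9, 1.1),          # perturb link masses ±10%
        "motor_torque_scale": rng.uniform(0.8, 1.2),  # weaken/strengthen motors
        "control_delay_ms": rng.uniform(0.0, 20.0),   # simulate actuation latency
    }

rng = random.Random(0)
episode_params = [randomize_physics(rng) for _ in range(3)]
```

A policy trained across many such randomized episodes must succeed under all of them, which is what lets it transfer to the one real-world configuration it eventually meets.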

Honestly, it needs to be seen to be believed, so watch Popular Science's videos below.

The results, as you can see, are quite remarkable: the robots are dynamic and nimble, pivoting to change direction and coordinating their limbs to kick and balance at the same time.

They fall over, but there's no diving. A few slide tackles would be cool, though. Would you even have red and yellow cards in robot soccer?

I suspect a soccer version of BattleBots might be around the corner.

DeepMind describes the work's success as follows: “The resulting agent exhibits robust and dynamic movement capabilities … It also learns to anticipate ball movements and block opponent shots.”

Compared to a more standard, rule-based policy programmed specifically for the OP3, DeepMind's RL approach delivered much better performance.

The AI-trained robots walked 181% faster, turned 302% faster, recovered from falls 63% faster, and kicked the ball 34% harder.

Analysis of the neural networks revealed an emerging understanding of soccer tactics, such as assessing ball possession and defending the goal when an opponent approaches.

Combined with DeepMind's advances in AI-optimized football training in collaboration with Liverpool FC, we’re likely heading towards a more digitized era in sport.

It's probably only a matter of time before we get a robot league where customized robots compete in highly dynamic competitive sports.
