
DeepMind builds table tennis robot that beats beginners 100% of the time

Researchers at Google DeepMind have developed an AI-controlled robot capable of playing competitive table tennis at an amateur level.

Detecting the presence of a table tennis ball, calculating its direction and moving the racket to hit it – all in a fraction of a second – is a mammoth task in robotics.

DeepMind's robot is built around an ABB IRB 1100 robotic arm mounted on two linear gantries, which let it move quickly across and toward the table.

It has an impressive range of motion, reaching most areas of the table to hit the ball with a racket much like a human does.

The “eyes” are high-speed cameras that capture images at 125 frames per second and transmit the information to a neural network-based perception system that tracks the ball’s position in real time.
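
To make the perception step more concrete, here is a minimal Python sketch of a ball-tracking loop. The 125 frames-per-second capture rate comes from the article; everything else (the class names, the short observation history and the finite-difference velocity estimate) is an illustrative assumption rather than DeepMind's actual perception stack.

```python
# Illustrative sketch of a ball-tracking loop: the class names, the history
# buffer and the velocity estimate are assumptions, not DeepMind's pipeline.
from collections import deque
from dataclasses import dataclass

import numpy as np

FRAME_RATE_HZ = 125          # capture rate mentioned in the article
DT = 1.0 / FRAME_RATE_HZ     # nominal time between frames


@dataclass
class BallObservation:
    position: np.ndarray     # 3D position in table coordinates (metres)
    timestamp: float         # seconds


class BallTracker:
    """Keeps a short history of observations and estimates ball velocity."""

    def __init__(self, history: int = 8):
        self.observations = deque(maxlen=history)

    def update(self, position_3d, timestamp: float) -> None:
        obs = BallObservation(np.asarray(position_3d, dtype=float), timestamp)
        self.observations.append(obs)

    def velocity(self) -> np.ndarray:
        """Finite-difference velocity from the two most recent observations."""
        if len(self.observations) < 2:
            return np.zeros(3)
        prev, last = self.observations[-2], self.observations[-1]
        return (last.position - prev.position) / (last.timestamp - prev.timestamp)


# Example: feed two consecutive (hypothetical) detections one frame apart.
tracker = BallTracker()
tracker.update([0.0, 0.2, 0.3], timestamp=0.0)
tracker.update([0.05, 0.21, 0.29], timestamp=DT)
print(tracker.velocity())    # rough velocity estimate in metres per second
```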

The AI that controls the robot uses a sophisticated two-level system:

  1. Low-level controllers (LLCs): These are specialized neural networks trained to perform specific table tennis techniques, such as forehand topspin shots or backhand targeting. Each LLC is designed to excel at a particular aspect of the game.
  2. High-level controller (HLC): This is the strategic brain of the system. The HLC chooses which LLC to use for each incoming ball, based on the current score, the opponent's play style and the robot's own capabilities.

This dual approach allows the robot to combine precise execution of individual shots with higher-level strategy, mimicking the way human players think about the game.
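
To make the division of labor concrete, the sketch below shows how a high-level controller might route each incoming ball to one of several specialized skill policies. All names, the left/right heuristic and the preference-score update are illustrative assumptions, not DeepMind's implementation.

```python
# Minimal sketch of the high-level / low-level split described above.
# All names and heuristics here are illustrative assumptions, not DeepMind's code.
from typing import Callable, Dict

import numpy as np

# A low-level controller (LLC) maps the current ball state to a motor command.
LowLevelController = Callable[[np.ndarray], np.ndarray]


class HighLevelController:
    """Chooses which specialized skill policy to run for each incoming ball."""

    def __init__(self, skills: Dict[str, LowLevelController]):
        self.skills = skills
        # Running estimate (0..1) of how well each skill has worked so far.
        self.skill_preference = {name: 0.5 for name in skills}

    def choose_skill(self, ball_state: np.ndarray) -> str:
        # Toy heuristic: ball_state[1] is the ball's lateral position, so use
        # forehand skills on one side and backhand skills on the other, then
        # break ties by the current preference score.
        side = "forehand" if ball_state[1] > 0 else "backhand"
        candidates = [n for n in self.skills if n.startswith(side)] or list(self.skills)
        return max(candidates, key=lambda n: self.skill_preference[n])

    def act(self, ball_state: np.ndarray) -> np.ndarray:
        return self.skills[self.choose_skill(ball_state)](ball_state)

    def record_outcome(self, skill_name: str, success: bool, lr: float = 0.1) -> None:
        # Nudge the preference towards the observed outcome.
        target = 1.0 if success else 0.0
        self.skill_preference[skill_name] += lr * (target - self.skill_preference[skill_name])


# Example wiring with placeholder skills (each just returns a zero command).
skills = {
    "forehand_topspin": lambda state: np.zeros(6),
    "backhand_push": lambda state: np.zeros(6),
}
hlc = HighLevelController(skills)
command = hlc.act(np.array([0.5, 0.3, 0.2]))  # ball on the forehand side
```

One appeal of this split is that the individual skills can be trained and refined independently of the strategy that decides when to use them.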

Bridging simulation and the real world

One of the biggest challenges in robotics is transferring skills learned in simulation to the real world.

The DeepMind study documents several techniques to resolve this issue:

  1. Realistic physics modelling: Researchers used advanced physics engines to model the complex dynamics of table tennis, including ball spin, air resistance, and racket-ball interactions.
  2. Domain randomization: During training, the AI was exposed to a wide range of simulated conditions, helping it generalize to the variations it would encounter in the real world (a rough sketch of the idea follows this list).
  3. Sim-to-real adaptation: The team developed methods to adapt capabilities learned in simulation to real-world performance, including a novel “spin correction” technique to handle differences in paddle behavior between simulation and reality.
  4. Iterative data collection: The researchers repeatedly updated their training data with data from real matches, creating a continually improving learning cycle.
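
As an illustration of the domain randomization idea mentioned in point 2, the sketch below draws a fresh set of physics parameters for each training episode. The parameter names and ranges are assumptions chosen for demonstration, not the values used in the DeepMind study.

```python
# Illustrative domain-randomization sketch: the parameter names and ranges are
# assumptions for demonstration, not the values used in the DeepMind study.
import random
from dataclasses import dataclass


@dataclass
class SimPhysicsParams:
    ball_restitution: float     # bounciness of the ball on the table
    air_drag_coeff: float       # aerodynamic drag on the ball
    paddle_friction: float      # affects how much spin the paddle imparts
    observation_delay_s: float  # simulated camera/processing latency


def sample_randomized_physics(rng: random.Random) -> SimPhysicsParams:
    """Draw a fresh set of physics parameters for one training episode."""
    return SimPhysicsParams(
        ball_restitution=rng.uniform(0.85, 0.95),
        air_drag_coeff=rng.uniform(0.35, 0.55),
        paddle_friction=rng.uniform(0.4, 0.8),
        observation_delay_s=rng.uniform(0.0, 0.02),
    )


# Each simulated rally gets its own random draw, so the policy cannot
# overfit to one exact physics configuration.
rng = random.Random(0)
episode_params = [sample_randomized_physics(rng) for _ in range(3)]
print(episode_params[0])
```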

Perhaps one of the robot's most impressive features is its ability to adapt in real time. During a game, the system tracks various statistics about its own performance and that of its opponent.

Based on this information, it adapts its strategy on the fly, learning to exploit weaknesses in the opponent's game while strengthening its own defense.
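
A toy version of this kind of in-game adaptation might look like the following: track how often each shot type wins the point against the current opponent, and bias future shot selection accordingly. The class, the optimistic prior and the exploration rate are illustrative assumptions, not DeepMind's method.

```python
# Sketch of in-game adaptation: track how often each shot type wins the point
# against the current opponent and bias future choices accordingly.
# This is an illustrative assumption about the idea, not DeepMind's method.
import random
from collections import defaultdict
from typing import List


class OpponentModel:
    def __init__(self):
        self.attempts = defaultdict(int)
        self.wins = defaultdict(int)

    def record(self, shot_type: str, won_point: bool) -> None:
        self.attempts[shot_type] += 1
        self.wins[shot_type] += int(won_point)

    def win_rate(self, shot_type: str) -> float:
        # Optimistic prior (0.5) for shots we have not tried much yet.
        return (self.wins[shot_type] + 1) / (self.attempts[shot_type] + 2)

    def pick_shot(self, options: List[str], explore: float = 0.1) -> str:
        # Mostly exploit what has worked, but keep exploring occasionally
        # so the opponent's weaknesses are still being probed.
        if random.random() < explore:
            return random.choice(options)
        return max(options, key=self.win_rate)


model = OpponentModel()
model.record("forehand_topspin", won_point=True)
model.record("backhand_push", won_point=False)
print(model.pick_shot(["forehand_topspin", "backhand_push"]))
```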

Ping Pong Robot Review

So how did DeepMind test its table tennis robot?

First, the team recruited 59 volunteer players and assessed their table tennis skills, categorizing them as beginner, intermediate, advanced or advanced+. From this initial pool, 29 participants covering all skill levels were selected for the full study.

Each player then competed against the robot in three matches, following modified table tennis rules that accommodated the robot's limitations.

In addition to collecting quantitative data from the robot, the researchers conducted short, semi-structured interviews with each participant after their matches to capture their overall experience.

Results

Overall, the robot won 45% of its games and showed a solid overall performance.

It dominated beginners (winning 100% of matches) and held its own against intermediate players (winning 55%), but lost every match against advanced players.

Fortunately for us mere mortals, there was at least one major weakness: the robot had trouble handling backspin, a significant weak point that more experienced players were able to exploit.

Even if you can't play table tennis at all, or think you're only mediocre at it, this robot would fancy its chances against you.

Barney J. Reed, a table tennis coach, commented on the study: “It's really impressive to watch the robot play against players of all levels and styles. Our goal was to get the robot to an intermediate level. Amazingly, it has done just that, and all the hard work has paid off.”

“I think the robot has even exceeded my expectations. It was a true honor and pleasure to be a part of this research. I learned a lot and am very grateful to everyone I was able to work with on it.”

This is far from DeepMind's first foray into sports robotics and AI. Not long ago, the team built AI soccer robots that could pass, tackle, and shoot.

DeepMind has been providing AI robotics tools to developers for years and has recently made breakthroughs in robot vision and dexterity.

As AI and robotics advance, we can expect to see more examples of machines mastering tasks that were once considered exclusively human domains.

The day may not be far off when you can challenge a robot to a game of table tennis at your local community center – but don't be surprised if it beats you in the first round.
