
Google Deepmind has created a robot that plays table tennis as well as an amateur human player.

To develop the robot, the researchers first collected a small dataset of human play. They then used reinforcement learning to train the robot in simulation. Special techniques allowed them to transfer the trained controller to real hardware without additional examples - a so-called "zero-shot" transfer from simulation to the real world.

The robot then played against humans to generate more training data. As it improved, the matches and maneuvers became more complex, while remaining grounded in the real world. The robot can also adapt to new opponents in real time.
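Deepmind has not released the training code, but the loop described above - seed with human data, train with reinforcement learning in simulation, deploy zero-shot on the real robot, and feed the new rallies back in - can be illustrated with a minimal Python sketch. All function names, the dummy policy, and the numbers below are assumptions for illustration, not Deepmind's implementation.

```python
# Illustrative sketch of the iterative sim-to-real training cycle described above.
# Every function is a stand-in: the real system uses a physics simulator,
# an RL algorithm, and actual robot hardware.

def train_in_simulation(dataset):
    """Stand-in for reinforcement learning in simulation, seeded with real rallies."""
    # A real implementation would optimize a control policy against a simulated table;
    # here the "policy" just records how much data it was trained on.
    return {"experience": len(dataset)}

def play_against_humans(policy, num_rallies=100):
    """Stand-in for zero-shot deployment on real hardware to collect new rallies."""
    # Each rally played on the physical table becomes a new training example.
    return [{"rally": i, "trained_on": policy["experience"]} for i in range(num_rallies)]

# 1. Start from a small seed dataset of human play.
dataset = [{"seed_rally": i} for i in range(50)]

# 2. Alternate simulated training and real-world matches, growing the dataset each
#    cycle so later rounds stay grounded in increasingly complex real play.
for cycle in range(3):
    policy = train_in_simulation(dataset)      # RL in simulation
    new_rallies = play_against_humans(policy)  # zero-shot transfer to the robot
    dataset.extend(new_rallies)
    print(f"cycle {cycle}: {len(dataset)} rallies in the training set")
```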

Hierarchical control and adaptation to opponents

The system uses a library of low-level skills like forehand topspin and backhand targeting. The researchers collected data on the strengths and limitations of each skill. A high-level controller then chooses the best skill based on match statistics, the skill descriptors, and the opponent's abilities.
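As a rough illustration of that two-level design, the hypothetical Python sketch below pairs a small skill library, each entry carrying a descriptor of its measured strengths, with a high-level controller that scores skills against the incoming ball and the opponent. The skill names, descriptor fields, and scoring rule are assumptions, not the system's actual components.

```python
# Hypothetical sketch of hierarchical control: low-level skills with descriptors,
# plus a high-level controller that picks the best skill for the situation.
from dataclasses import dataclass

@dataclass
class Skill:
    name: str
    handles_spin: str     # spin type the skill was measured to return well
    return_rate: float    # estimated success rate from offline evaluation data

SKILL_LIBRARY = [
    Skill("forehand_topspin", handles_spin="backspin", return_rate=0.80),
    Skill("backhand_target_left", handles_spin="topspin", return_rate=0.70),
    Skill("forehand_block", handles_spin="topspin", return_rate=0.90),
]

def choose_skill(incoming_spin: str, opponent_weak_side: str) -> Skill:
    """High-level controller: score each skill against game state and opponent info."""
    def score(skill: Skill) -> float:
        value = skill.return_rate                # base: how reliable the skill is
        if skill.handles_spin == incoming_spin:
            value += 0.2                         # descriptor matches the incoming ball
        if opponent_weak_side in skill.name:
            value += 0.1                         # exploit the opponent's weaker side
        return value
    return max(SKILL_LIBRARY, key=score)

# Example: a topspin ball is coming in and the opponent is weaker on their left.
print(choose_skill("topspin", "left").name)
```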

In tests, the robot played 29 matches against humans of varying skill levels. It won 45 percent overall, beating all beginners and 55 percent of intermediate players. However, it lost every match against the most skilled opponents.

Deepmind says its latest work demonstrates how robots can master complex real-world tasks requiring physical skill, perception, and strategy. Table tennis has been a key benchmark for robotics research since the 1980s because the game requires both basic skills, such as returning the ball, and higher-level strategic planning.

Summary
  • Google Deepmind has developed a robot that can play table tennis at the level of an amateur human player. Table tennis has been a benchmark for robotics research since the 1980s.
  • The robot was first trained using a small amount of data from human players. The researchers then trained the robot in simulation using reinforcement learning and transferred the trained controller to the real hardware without any further examples.
  • In tests against 29 human opponents, the robot won 45 percent of the games. It won all games against novices and 55 percent against intermediate players, but lost every match to the most advanced players.