Google DeepMind has created a robot that plays table tennis at the level of an amateur human player.
To develop the robot, the researchers first collected a small dataset of human play. They then used reinforcement learning to train the robot in simulation. Special techniques allowed them to transfer the learned policies to real hardware without any additional real-world examples, a so-called "zero-shot" sim-to-real transfer.
The robot then played against humans to generate more training data. As it improved, the matches and maneuvers it faced grew more complex, while the training data stayed grounded in real-world play. The robot can also adapt to new opponents in real time.
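In outline, this alternation between simulated training and real-world play is a simple loop. The sketch below is only an illustration of that loop; the function names are hypothetical stand-ins, not DeepMind's actual code:

```python
import random

def train_in_sim(dataset):
    # Stand-in for reinforcement-learning training in simulation
    # (hypothetical; the real system trains skill policies on ball-state data).
    return {"trained_on": len(dataset)}

def play_matches(policy):
    # Stand-in for real matches against humans, which yield new and
    # progressively harder ball-state data as the robot improves.
    return [random.random() for _ in range(10)]

dataset = [random.random() for _ in range(100)]  # seed: human-play data
policy = None
for _ in range(3):                  # alternate simulation and the real world
    policy = train_in_sim(dataset)   # all training happens in simulation
    dataset += play_matches(policy)  # zero-shot deployment grows the dataset
```

Each round of real matches extends the dataset used for the next round of simulated training, which is how the curriculum hardens while staying anchored to real rallies.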
Hierarchical control and adaptation to opponents
The system uses a library of low-level skills, such as forehand topspin and backhand targeting. The researchers collected data on the strengths and limitations of each skill. A high-level controller then selects the most promising skill based on current game statistics, the skill descriptors, and the opponent's observed capabilities.
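As a rough illustration of this hierarchy, the sketch below picks among skills by blending an offline success estimate with online results against the current opponent. All class and field names are invented for the example, not DeepMind's API; the actual controller also conditions on richer game context:

```python
from dataclasses import dataclass

@dataclass
class Skill:
    # A low-level skill policy, e.g. returning with forehand topspin.
    name: str
    base_success: float  # offline estimate from pre-collected skill data

@dataclass
class SkillStats:
    wins: int = 0
    attempts: int = 0

    def estimate(self, prior: float, prior_weight: float = 5.0) -> float:
        # Blend the offline prior with online evidence so skill choice
        # adapts to the current opponent without retraining the skills.
        return (prior * prior_weight + self.wins) / (prior_weight + self.attempts)

class HighLevelController:
    # Picks the skill with the best estimated success rate, then updates
    # that estimate from the outcome of the rally.
    def __init__(self, skills):
        self.skills = skills
        self.stats = {s.name: SkillStats() for s in skills}

    def choose(self, applicable):
        return max(applicable,
                   key=lambda s: self.stats[s.name].estimate(s.base_success))

    def update(self, skill, point_won):
        st = self.stats[skill.name]
        st.attempts += 1
        st.wins += int(point_won)

# Example: two skills with offline priors; the controller adapts online.
skills = [Skill("forehand_topspin", 0.60), Skill("backhand_target", 0.50)]
ctrl = HighLevelController(skills)
pick = ctrl.choose(skills)          # initially favors the stronger prior
ctrl.update(pick, point_won=False)  # a lost point lowers its estimate
```

The design point this captures is that adaptation happens at the selection level: the controller can shift away from a skill that a particular opponent handles well without touching the low-level policies themselves.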
In tests, the robot played 29 matches against humans of varying skill levels. It won 45 percent of the matches overall, beating every beginner and 55 percent of the intermediate players. However, it lost every match against the most skilled opponents.
DeepMind says its latest work demonstrates how robots can master complex real-world tasks that demand physical skill, perception, and strategy. Table tennis has been a key benchmark for robotics research since the 1980s because the game requires both basic skills, such as returning the ball, and higher-level strategic planning.