Researchers at Google DeepMind have used reinforcement learning to teach small humanoid robots soccer skills like turning, kicking, and chasing the ball.
The agents were first trained in simulation using the MuJoCo physics engine; the learned policies were then transferred to physical Robotis OP3 humanoid robots with 20 articulated joints.
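To picture the setup, here is a minimal sketch of a MuJoCo control loop using the official Python bindings. The "op3.xml" model path and the zero-action stand-in policy are illustrative placeholders, not DeepMind's actual training assets.

```python
import numpy as np
import mujoco

# Load a humanoid model; "op3.xml" is a placeholder path.
model = mujoco.MjModel.from_xml_path("op3.xml")
data = mujoco.MjData(model)

def policy(obs: np.ndarray) -> np.ndarray:
    # Stand-in for a trained network: an RL policy would map observations
    # (joint angles, velocities, ball position, ...) to actuator targets.
    return np.zeros(model.nu)

for _ in range(1000):
    obs = np.concatenate([data.qpos, data.qvel])
    data.ctrl[:] = policy(obs)      # apply one action per actuated joint
    mujoco.mj_step(model, data)     # advance the physics by one timestep
```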
Training took place in two phases: first, the robots learned individual skills such as getting up from the ground and scoring goals. These skills were then combined into a single policy that played 1v1 matches against increasingly strong opponents, learning to handle different game situations.
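One common way to realize "increasingly strong opponents" in self-play is to sample them from a pool of earlier snapshots of the agent itself, so the opposition improves as the agent does. The sketch below assumes that snapshot-pool scheme and uses stand-in classes; it is a schematic, not DeepMind's training code.

```python
import copy
import random

class Policy:
    """Stand-in for a neural-network policy; version tracks training progress."""
    def __init__(self, version: int = 0):
        self.version = version

    def improved(self) -> "Policy":
        return Policy(self.version + 1)

def play_match(agent: Policy, opponent: Policy) -> float:
    """Placeholder for one simulated 1v1 episode returning the agent's score."""
    return float(agent.version >= opponent.version)

# Phase two sketch: train against snapshots of earlier selves.
agent = Policy()
opponent_pool = [copy.deepcopy(agent)]

for step in range(10):
    opponent = random.choice(opponent_pool)     # sample a past snapshot
    play_match(agent, opponent)                 # (RL update would happen here)
    agent = agent.improved()                    # stand-in for a gradient step
    opponent_pool.append(copy.deepcopy(agent))  # pool grows stronger over time
```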
To bridge the "sim-to-real gap" between relatively simple computer simulations and the complex real world, the team added disruptive forces and random events to the simulator. This allowed the robots to learn, through trial and error, how to deal with unexpected perturbations in the real world.
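In practice this is known as domain randomization. A minimal sketch of the idea follows, again in MuJoCo's Python bindings: physics parameters are jittered per episode and the torso is occasionally shoved with a random force. The model path, the body index for the torso, and all randomization ranges are assumptions chosen for illustration.

```python
import numpy as np
import mujoco

model = mujoco.MjModel.from_xml_path("op3.xml")  # placeholder model path
data = mujoco.MjData(model)
rng = np.random.default_rng(0)

def randomize_episode(model: mujoco.MjModel) -> None:
    # Jitter physical constants so the policy cannot overfit one simulated world.
    model.geom_friction[:, 0] *= rng.uniform(0.5, 1.5, model.ngeom)
    model.body_mass[:] *= rng.uniform(0.9, 1.1, model.nbody)

def random_push(data: mujoco.MjData, prob: float = 0.01) -> None:
    # Occasionally apply a random external force (body 1 assumed to be the torso).
    if rng.random() < prob:
        data.xfrc_applied[1, :3] = rng.normal(0.0, 20.0, 3)
    else:
        data.xfrc_applied[1, :3] = 0.0

randomize_episode(model)
for _ in range(1000):
    random_push(data)
    mujoco.mj_step(model, data)
```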
AI training beats traditional programming
In experiments, the RL-trained robot walked 181% faster, turned 302% faster, got up 63% faster, and kicked the ball 34% faster than a manually programmed baseline, DeepMind reports.
The robot also learned sophisticated tactical behavior: adapting its stride length to the state of play, chaining movements to score, anticipating the ball's trajectory, and blocking opponents' shots - evidence of a basic grasp of 1v1 soccer.
DeepMind sees this work as a step toward training general-purpose robots, not just robots for specific tasks. That means figuring out the minimum amount of guidance they need to learn agile motor skills, while also tapping into the capabilities of multimodal foundation models.
"Ultimately, while the results are fun to watch, this research is part of our long-term goal to bring robots into our everyday lives - in a way that is helpful, empowering, and safe," the company writes.