A robot dog trained in a computer simulation can transfer its knowledge to reality. Not only that: it runs faster and more dynamically than robots programmed by humans.
The impressive motor-skill demonstrations by robot manufacturer Boston Dynamics generate millions of clicks online, and rightly so, precisely because they are impressive. But they are primarily a reflection of human engineering and programming skill. The movements of Spot, Atlas, and company are mostly remote-controlled or programmed by hand, which is an enormous amount of work.
Robot dog learns to walk in a simulation
Researchers at MIT are now demonstrating the superiority of machine-learned robotic movements over human-designed ones. Their Mini Cheetah robot dog is trained from scratch, via trial and error, in a computer simulation spanning many different environments.
Cheetah can recall the motor knowledge learned in the simulation in real life. In addition to record speeds of up to almost 3.9 m/s and enormous agility, it also demonstrates a special talent for moving across difficult terrain.
On gravel, for example, Cheetah moves far more surely and dynamically, even when running downhill and slipping slightly on the small stones with every step. Even an icy patch on the road doesn't throw the robot dog off its stride. According to the researchers, the top speed is a robot record.
To exclude human involvement from the training process as much as possible, the researchers rely on so-called model-free reinforcement learning. The robot starts learning motions in the simulation without any prior human knowledge or hand-written rules. Through trial and error, it develops its own control policy, which it continuously expands and optimizes.
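The trial-and-error idea can be illustrated with the simplest form of model-free reinforcement learning, tabular Q-learning. The toy "terrain" below (a one-dimensional chain the agent must walk across) and all names in it are illustrative assumptions, not the MIT setup; the point is only that the agent learns action values from rewards alone, without ever building a model of its environment.

```python
# Minimal sketch of model-free RL (tabular Q-learning) on a toy 1-D
# "terrain": the agent starts at cell 0 and must reach cell 4.
# Hypothetical example; not the controller used on Mini Cheetah.
import random

random.seed(0)

N_STATES = 5          # cells 0..4; cell 4 is the goal
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

# Q-table of action values; no model of the environment is stored.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """Environment transition: reward 1 only when the goal is reached."""
    nxt = min(max(state + ACTIONS[action], 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for episode in range(200):           # trial-and-error episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit current values, sometimes explore
        if random.random() < EPS:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda i: Q[s][i])
        s2, r, done = step(s, a)
        # temporal-difference update of the action value
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

# After training, the greedy policy steps right (action 1) in every cell.
policy = [max((0, 1), key=lambda i: Q[s][i]) for s in range(N_STATES - 1)]
print(policy)
```

The MIT work uses deep neural networks and a physics simulator rather than a table and a chain, but the learning signal is the same: rewards from trial and error, with no hand-coded movement rules.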
Machine beats humans – and makes robot movements more scalable
Extensive simulation training means humans no longer have to tell the robot how to behave in every situation. Cheetah already brings a wide range of terrain experience to the table, even before it takes its first step in a real-world environment.
According to the researchers, Cheetah can acquire the equivalent of 100 days of motor experience on difficult terrain in just three hours of simulation training. In the real world, the controller running on the robot matches each situation to the skills learned in simulation and executes them in real time.
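The quoted numbers imply a substantial compression of experience, which a quick back-of-the-envelope calculation makes concrete:

```python
# Back-of-the-envelope check of the quoted figures: 100 days of
# real-world experience condensed into 3 hours of simulation.
real_hours = 100 * 24        # 100 days expressed in hours
sim_hours = 3
speedup = real_hours / sim_hours
print(speedup)               # factor by which simulation outpaces reality
```

That is a roughly 800-fold speedup over gathering the same experience on physical hardware, ignoring any sim-to-real gap.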
“At the heart of artificial intelligence research is the trade-off between what the human needs to build in (nature) and what the machine can learn on its own (nurture),” the researchers write. Traditionally, humans would provide a lot of guidance, but this process is not scalable because it is too complex, they say.
“A more practical way to build a robot with many diverse skills is to tell the robot what to do and let it figure out the how. Our system is an example of this.”
The researchers say they are already applying the model-free learning approach in a simulation to other robotic systems, including a hand that can pick up and manipulate different objects.