Meta’s AI pioneer Yann LeCun talks about the three big challenges on the road to the next generation of artificial intelligence.
Born in 1960, Yann LeCun is considered one of the world’s most important AI researchers. Among other things, he helped invent Convolutional Neural Networks, which led to a breakthrough in AI image analysis. In 2018, LeCun received the Turing Award, the highest honor in computer science, for his research.
In 2013, Mark Zuckerberg hired the AI researcher for Facebook, where he helped establish the Facebook AI Research Lab (FAIR). LeCun is still chief AI scientist and vice president there today.
Despite all the successes of AI research, LeCun doesn’t see artificial intelligence as having reached even the cognitive level of a house cat, as he explained in a talk around 2018: it lacks a rudimentary understanding of the world.
Self-supervised learning is the key to AI understanding the world
Even in 2022, LeCun doesn’t see artificial intelligence at cat level. Despite a meager 800 million neurons, he said, the cat brain is far ahead of any giant artificial neural network. Speculation about a path to the highly developed cognitive abilities and long-term planning of human intelligence thus seems pointless at first glance.
But the common foundation of cats and humans is a highly developed understanding of the world, grounded in abstract representations of their environment that form models, which can, for example, predict actions and their consequences, LeCun said. The ability to learn such models of the environment is the key to thinking machines, he said.
LeCun derives three major challenges for AI research from this:
- AI must learn to represent the world.
- AI must learn to think and plan in ways that are compatible with gradient-based learning.
- AI must learn hierarchical representations of action plans.
LeCun sees the solution to the first challenge in self-supervised learning, which is used, for example, in training language models or image analysis systems.
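The core idea of self-supervised learning is that the training signal comes from the data itself rather than from human labels. A minimal sketch of the masked-token objective used to train language models illustrates this; the function name and masking scheme here are illustrative, not taken from any specific library:

```python
# Self-supervised learning derives training targets from the data itself.
# Sketch: hide part of the input and ask the model to predict the hidden
# part. The hidden tokens become the labels -- no human annotation needed.
import random

def make_masked_examples(tokens, mask_rate=0.15, mask_token="[MASK]", seed=0):
    """Turn an unlabeled token sequence into (input, target) training pairs."""
    rng = random.Random(seed)
    masked = list(tokens)
    targets = {}  # position -> original token the model must predict
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            masked[i] = mask_token
            targets[i] = tok
    return masked, targets

sentence = "the cat sat on the mat".split()
inputs, targets = make_masked_examples(sentence, mask_rate=0.5)
print(inputs)   # some tokens replaced by [MASK]
print(targets)  # the originals of the masked tokens serve as labels
```

The same principle carries over to video: instead of predicting a hidden word, a model can be asked to predict hidden or future frames, which is what makes raw video usable as training material.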
The successful use of these systems shows that AI is capable of building complex models of the world. Instead of language or images, however, the next AI generation will learn directly from videos. Meta is currently putting a lot of effort into collecting video data from the first-person perspective for this new AI generation, but YouTube videos are also suitable training material, according to LeCun.
LeCun believes that AI systems can learn about the physical foundations of our world from such videos. This understanding would in turn be the basis for numerous abilities, such as grasping objects or driving a car.
AI should learn to think and act
Solving the first challenge lays the foundation for solving the second: systems that learn to reason using the same method that makes Deep Learning so successful, namely gradient-based learning. For LeCun, the learned, complex models of the world are the key to creating thinking machines.
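Gradient-based learning means adjusting parameters step by step in the direction that reduces a loss. A minimal sketch, fitting a single weight by gradient descent (a toy stand-in for how deep networks are trained, not anything specific to LeCun's proposal):

```python
# Gradient-based learning in miniature: repeatedly nudge a parameter
# against the gradient of a loss. Here we fit a weight w so that
# w * x approximates y for a small dataset.
def fit_weight(data, lr=0.1, steps=100):
    w = 0.0
    for _ in range(steps):
        # gradient of the mean squared error 0.5 * (w*x - y)^2 w.r.t. w
        grad = sum((w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad  # step against the gradient
    return w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # samples of y = 2x
w = fit_weight(data)
print(round(w, 3))  # converges to 2.0
```

The point of LeCun's second challenge is that reasoning and planning should be expressible in this same differentiable framework, so that the whole system can be trained end to end.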
LeCun said he has no solution yet for the third challenge. An AI system that is to act in the real world – whether as a robot or an autonomous vehicle – must be able to anticipate the consequences of its actions and choose the best action in each case, he said. In simple cases, such as moving a robot arm or controlling a rocket, this is already possible (Model Predictive Control).
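Model Predictive Control works by simulating candidate action sequences with a model of the dynamics, picking the best one, executing only its first action, and then replanning. A minimal sketch for a 1-D point mass, with illustrative dynamics and cost terms chosen for this example:

```python
# Model Predictive Control in miniature: at each step, simulate candidate
# action sequences with a dynamics model, pick the sequence whose predicted
# trajectory best reaches the target, apply only its first action, replan.
import itertools

def simulate(pos, vel, actions, dt=0.1):
    """Simple dynamics model: each action is an acceleration."""
    for a in actions:
        vel += a * dt
        pos += vel * dt
    return pos, vel

def mpc_step(pos, vel, target, horizon=3):
    candidates = itertools.product([-1.0, 0.0, 1.0], repeat=horizon)
    def cost(actions):
        p, v = simulate(pos, vel, actions)
        return (p - target) ** 2 + 0.1 * v ** 2  # reach target, then stop
    best = min(candidates, key=cost)
    return best[0]  # execute only the first action, then replan

pos, vel, target = 0.0, 0.0, 1.0
for _ in range(100):
    a = mpc_step(pos, vel, target)
    vel += a * 0.1
    pos += vel * 0.1
print(round(pos, 2))  # the controller drives the point to the target
```

This brute-force search over action sequences works because the dynamics here are trivial and known; LeCun's point is that for human behavior, fluids, or trees in the wind, no such hand-written model exists, so it would have to be learned.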
But in the future, he says, systems will be needed that can handle all other scenarios as well: “It’s not just about the trajectory of a missile or the movement of a robotic arm, which can be modeled through careful mathematics. It’s about everything else, everything we observe in the world: About human behavior, about physical systems that involve collective phenomena like water or branches in a tree, about complex things for which humans can easily develop abstract representations and models,” LeCun said.
LeCun sums up the next decade’s big challenge for AI research in one question: How can we get machines to learn models that deal with uncertainty and capture the real world with all its complexity?
For LeCun, the answer starts with self-supervised learning.
Featured Image: O’Reilly Internal at Flickr, licensed under CC BY-NC 2.0
Correction: Adjusted description of LeCun’s position on reasoning systems. He does not explicitly argue against symbolic systems.