Premature regulation of AI could entrench the dominance of big tech companies and stifle competition, warns Meta's chief AI scientist Yann LeCun.
LeCun believes that regulating AI research and development could be counterproductive and lead to "regulatory capture" under the guise of AI safety.
He attributes the calls for AI regulation to a "superiority complex" at leading technology companies, which claim that only they can be trusted to develop AI safely.
LeCun called this attitude "incredibly arrogant" and advocated a more open approach to AI development. Meta relies on open-source models like LLaMA, which encourage competition and allow a wider variety of people to develop and use AI systems, LeCun said.
Critics of Meta's strategy, on the other hand, worry that putting powerful generative AI models in the hands of potentially malicious actors could increase the risks of disinformation, cyber warfare, and bioterrorism.
The renowned AI researcher made the comments to the Financial Times ahead of the AI Safety Summit at Bletchley Park, hosted by the British government in November.
Don't fear the Terminator
LeCun called the idea that today's AI could lead to the annihilation of humanity "preposterous." People, he said, have been conditioned by science fiction and the "Terminator" scenario to believe that intelligent machines will take over the moment they become smarter than humans.
But intelligence and the drive for dominance are not the same thing, said LeCun, who expects humans to remain the apex species even in an age of super AI. Today's AI models, he argues, are not as capable as some researchers make them out to be: they lack understanding of the world, planning, and true reasoning.
LeCun accuses OpenAI and Google DeepMind in particular of being "consistently over-optimistic." Human-like AI, he says, is much more complex than today's systems and requires several "conceptual breakthroughs."
He suggests that AI could be controlled by building "moral character" into these systems, similar to how laws regulate human behavior. The startup Anthropic is taking this approach with a constitution for the chatbot Claude, and OpenAI has said it is also experimenting with this approach.
Machines will pull ahead eventually
Still, LeCun believes there is "no question" that machines will eventually surpass human intelligence in most areas, which he sees as a positive: it could usher in a new renaissance in learning. Capable AI systems could also help humanity tackle major challenges such as climate change and curing disease.
LeCun envisions a future where everyone has access to AI assistants that support everyday life and make it easier to interact with the digital world. "We’re not going to use search engines anymore," he said.
In the spring of 2022, LeCun presented his vision of a future "autonomous AI" that could bring machines closer to human-like intelligence. The proposed architecture consists of six modules: configurator, perception, world model, cost, actor, and short-term memory. At its core is the world model module, based on the Joint Embedding Predictive Architecture (JEPA), which enables self-supervised learning from large amounts of complex data by generating abstract representations. Many questions about the design remain open, however.
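LeCun's six-module proposal is a research sketch, not a released system. As a rough illustration only, the control loop it describes (perceive, predict in representation space, score against a cost, act) might be caricatured as follows — all module names, weights, dimensions, and the one-step planner here are invented for this example and are not taken from LeCun's paper; the configurator, which would adapt the other modules to the task at hand, is omitted:

```python
import numpy as np

# Toy dimensions and random weights standing in for learned parameters.
rng = np.random.default_rng(0)
OBS_DIM, REPR_DIM, ACT_DIM = 8, 4, 2
W_enc = rng.normal(size=(OBS_DIM, REPR_DIM))
W_dyn = rng.normal(size=(REPR_DIM, REPR_DIM))
W_act = rng.normal(size=(ACT_DIM, REPR_DIM))

def perception(observation):
    """Perception module: encode a raw observation into an abstract representation."""
    return np.tanh(observation @ W_enc)

def world_model(state_repr, action):
    """JEPA-style world model: predicts the *representation* of the next state,
    rather than reconstructing the raw observation itself."""
    return np.tanh(state_repr @ W_dyn + action @ W_act)

def cost(state_repr, goal_repr):
    """Cost module: distance to the goal in representation space."""
    return float(np.sum((state_repr - goal_repr) ** 2))

def actor(state_repr, goal_repr, candidate_actions):
    """Actor module: pick the action whose predicted outcome minimizes cost
    (a crude one-step planner)."""
    return min(candidate_actions,
               key=lambda a: cost(world_model(state_repr, a), goal_repr))

# One perceive-predict-act cycle; short_term_memory logs the episode.
short_term_memory = []
obs = rng.normal(size=OBS_DIM)
goal = np.tanh(rng.normal(size=REPR_DIM))
state = perception(obs)
candidates = [rng.normal(size=ACT_DIM) for _ in range(5)]
action = actor(state, goal, candidates)
short_term_memory.append((state, action))
```

The point of the JEPA idea this sketch gestures at is that prediction happens in abstract representation space, so the model can ignore unpredictable low-level detail instead of modeling every pixel of the future.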