What is the real risk of AI moving forward: regulation or openness?
Yann LeCun, Chief AI Scientist at Meta, comments on Twitter about the potential consequences of regulating open AI research and development. He warns that regulation could lead to a few companies controlling the AI industry. In his view, this is the most dangerous scenario imaginable.
LeCun criticizes AI research leaders
LeCun's tweet was directed at AI pioneers Geoffrey Hinton, Yoshua Bengio, and Stuart Russell, who have repeatedly and publicly expressed concerns about the potential negative impacts of AI.
According to LeCun, the majority of the academic community supports open AI research and development, with Hinton, Bengio, and Russell as notable exceptions.
He argues that their "fear-mongering" provides ammunition for corporate lobbyists. The real AI disaster would be if a few corporations took control of AI, LeCun says.
Specifically, LeCun accuses OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, and OpenAI Chief Scientist Ilya Sutskever of massive corporate lobbying and of attempting to regulate the AI industry in their favor under the guise of safety.
"I have made lots of arguments that the doomsday scenarios you are so afraid of are preposterous," LeCun writes.
Safe and open AI is possible - and desirable
LeCun advocates a combination of human creativity, democracy, market forces, and product regulation to drive the development of AI systems. He believes that safe and controllable AI systems are possible.
Meta's Chief AI Scientist is researching a new autonomous AI architecture that can be safely controlled through objectives and guardrails. He believes the fuss about the dangers of current AI models, especially large language models, is overblown.
In his tweet, LeCun rejects the notion that AI is an uncontrollable natural phenomenon, emphasizing that it is developed by people and organizations that are capable of doing the "right things."
Calling for regulation of AI research and development implies that these individuals and organizations are incompetent, reckless, self-destructive, or evil, he says.
LeCun is a strong proponent of open-source AI platforms, arguing that they are critical to ensuring that AI systems reflect the full range of human knowledge and culture. He envisions a future where contributions to AI platforms come from the community, much like Wikipedia.
LeCun warns that if open-source AI were regulated, a handful of companies from the U.S. and China would take control of AI platforms, posing significant risks to democracy and cultural diversity. "This is what keeps me up at night," LeCun writes.
LeCun's employer, Meta, backs open-source AI
LeCun's employer Meta, with its Llama language models, has been and will likely remain instrumental in the development of open-source AI. The current Llama 2 is roughly at GPT-3.5 level, and the upcoming Llama 3 is expected to reach GPT-4 level.
Meta's goal may be to make its own open-source models a kind of operating system for many AI applications, similar to Google's Android for smartphones. Meta could then make money from additional services, and it would also have developers on its side who would benefit from a broad development standard.
However, LeCun is also one of the most respected AI researchers and could have his pick of employers. His statements are therefore likely to reflect his own beliefs rather than Meta's corporate strategy; LeCun may even have initiated Meta's open-source strategy in the first place.
Hinton, Bengio, and LeCun were jointly awarded the 2018 Turing Award, the most prestigious prize in computer science, for their contributions to deep learning.