
What is the real risk of AI moving forward: regulation or openness?


Yann LeCun, head of AI research at Meta, commented on Twitter about the potential consequences of regulating open AI research and development. He warns that regulation could lead to a few companies controlling the AI industry, which he considers the most dangerous scenario imaginable.

LeCun criticizes AI research leaders

LeCun's tweet was directed at AI pioneers Geoff Hinton, Yoshua Bengio, and Stuart Russell, who have repeatedly and publicly expressed concerns about the potential negative impacts of AI.

According to LeCun, the majority of the academic community supports open AI research and development, with AI pioneers Hinton, Bengio, and Russell as notable exceptions.


He argues that their "fear-mongering" provides ammunition for corporate lobbyists. The real AI disaster would be if a few corporations took control of AI, LeCun says.

Specifically, LeCun accuses OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, and OpenAI Chief Scientist Ilya Sutskever of massive corporate lobbying and of attempting to regulate the AI industry in their favor under the guise of safety.

"I have made lots of arguments that the doomsday scenarios you are so afraid of are preposterous," LeCun writes.

Safe and open AI is possible - and desirable

LeCun advocates a combination of human creativity, democracy, market forces, and product regulation to drive the development of AI systems. He believes that safe and controllable AI systems are possible.

Meta AI's chief scientist is researching a new autonomous AI architecture that can be safely controlled by objectives and guardrails. He believes that the fuss about the dangers of current AI models, especially large language models (LLMs), is overblown.


In his tweet, LeCun rejects the notion that AI is an uncontrollable natural phenomenon, emphasizing that it is developed by people and organizations that are capable of doing the "right things."

Calling for regulation of AI research and development implies that these individuals and organizations are incompetent, reckless, self-destructive, or evil, he says.

LeCun is a strong proponent of open-source AI platforms, arguing that they are critical to ensuring that AI systems reflect the full range of human knowledge and culture. He envisions a future where contributions to AI platforms come from the community, much like Wikipedia.

LeCun warns that if open-source AI were regulated, a handful of companies from the U.S. and China would take control of AI platforms, posing significant risks to democracy and cultural diversity. "This is what keeps me up at night," LeCun writes.


LeCun's employer, Meta, backs open-source AI

Meta has been, and will likely continue to be, instrumental in the development of open-source AI with its Llama language models. The current Llama 2 model is roughly at GPT-3.5 level, and the upcoming Llama 3 is expected to reach GPT-4 level.

Meta's goal may be to make its own open-source models a kind of operating system for many AI applications, similar to Google's Android for smartphones. Meta could then make money from additional services, and it would also have developers on its side who would benefit from a broad development standard.

However, LeCun is also one of the most respected AI researchers and could choose his employers. His statements should therefore reflect his beliefs, not Meta's corporate strategy. LeCun may have initiated Meta's open-source strategy in the first place.

Hinton, Bengio, and LeCun were jointly awarded the 2019 Turing Award, the most prestigious prize in computer science, for their contributions to deep learning.

Summary
  • Meta's head of AI research, Yann LeCun, expressed concern on Twitter about regulating open AI research and development, warning that it could lead to a few companies controlling the AI industry.
  • LeCun advocates a combination of human creativity, democracy, market forces, and product regulation to drive AI development, and believes that safe and controllable AI systems are possible.
  • He advocates open-source AI platforms to ensure that AI systems reflect human knowledge and culture. He warns that regulating open-source AI could lead to a handful of companies taking control, posing risks to democracy and cultural diversity.
Online journalist Matthias is the co-founder and publisher of THE DECODER. He believes that artificial intelligence will fundamentally change the relationship between humans and computers.