
Update, May 4, 2023:

Hinton tells the British newspaper The Guardian that after his public resignation from Google, Bernie Sanders, Elon Musk, and the White House contacted him to discuss the risks of AI.

Hinton describes himself as a "socialist" and says he believes that "the media" and "the means of computation" should not be privately owned. Google, he says, has acted as responsibly as possible within a capitalist system, but is ultimately beholden to its shareholders.

Hinton does not have a specific solution to the risks of AI in mind. He says he has just "suddenly become aware" that something "really bad" could happen.


"We need to think hard about it now, and if there’s anything we can do. The reason I’m not that optimistic is that I don’t know any examples of more intelligent things being controlled by less intelligent things."

In particular, authoritarian regimes could benefit from AI technology by "destroying truth" and manipulating elections. In the U.S., he said, these possibilities face a divided populace that can't even agree to stop selling assault rifles to private citizens.

Original article from May 1, 2023:

Geoffrey Hinton is one of the world's most prominent AI researchers. Now he is wrapping up his career at Google - and saying goodbye with a warning.

Renowned computer scientist and cognitive psychologist Geoffrey Hinton laid the groundwork for today's advanced AI systems like ChatGPT with his research on artificial neural networks, deep learning, and especially backpropagation. For this work, he received the Turing Award, often called the Nobel Prize of computing.
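
Backpropagation works by propagating the output error backwards through the network, layer by layer, to compute how each weight should change. A minimal sketch in Python (a toy XOR example for illustration only, not code from Hinton's work):

```python
# Toy backpropagation: a two-layer network learns XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # input -> hidden
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for step in range(5000):
    # Forward pass: compute the network's prediction.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: chain rule, from the squared-error loss
    # back through the output layer to the hidden layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent update of all weights and biases.
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(0)

print(out.round(2))  # approaches [[0], [1], [1], [0]]
```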

In April, Hinton quit his job at Google so that he could, as he said, freely criticize the development of AI. Now, in an interview with The New York Times, he speaks openly about his concerns about the rapid development of artificial intelligence.

"Look at how it was five years ago and how it is now," Hinton said of AI progress. "Take the difference and propagate it forwards. That’s scary."

Part of him regrets his life's work, Hinton said. He consoles himself with the "normal excuse" in such cases: if he hadn't done it, someone else would have.

Hinton believes he misjudged the speed of AI development by 30 to 50 years

Hinton's first fear is the mass dissemination of fake news, videos, and photos, leaving people unable to tell what is true. It is hard to imagine how to prevent such abuse, he said. Hinton is also critical of AI's impact on the job market, as AI could eventually take over more than just tedious work.


Another of Hinton's concerns is the development of AI-based autonomous weapons, which he has criticized in the past. By his own admission, Hinton also significantly underestimated the speed of AI development.

"The idea that this stuff could actually get smarter than people — a few people believed that," Hinton said. "But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that."

Until a year ago, Google had a good handle on the risks of the technology and was careful not to release anything that could cause harm. But Microsoft has unleashed a race that may be unstoppable without global regulation, Hinton said.

His best hope is that leading scientists will find ways to control the technology. Until they do, he said, they should not develop it further. "I don’t think they should scale this up more until they have understood whether they can control it," he said.

Hinton recalls Robert Oppenheimer, considered the "father of the atomic bomb", who justified his work on a potentially dangerous technology by its sheer technical feasibility. He used to quote Oppenheimer regularly, Hinton said. Not anymore.

Human-like AI through deep learning?

The 75-year-old Briton believes that human-like artificial intelligence can be achieved through deep learning alone. If appropriately trained AI systems were scaled up sufficiently, they would be able to reproduce the full range of human intelligence.

He changed his mind on this when he studied the large language models from Google and OpenAI. These are inferior to the human brain in some ways, but far superior in others, Hinton said. What happens in these systems may be "actually a lot better" than what happens in the human brain, according to Hinton.

Whether the scale of existing AI systems is sufficient for human-like or general AI is debatable. Well-known researchers such as Meta's AI chief Yann LeCun and Gary Marcus believe that fundamentally different architectures are needed. Critics compare trying to achieve general AI with deep learning to trying to climb a ladder to the moon.

By contrast, there is little controversy about the thesis that AI systems can have a significant impact on our lives whether or not they are generally intelligent, as long as they are cognitively powerful.

Summary
  • Geoffrey Hinton is one of the most prominent AI researchers, and his work has laid the groundwork for today's rapid advances in AI.
  • Now he is leaving Google - and in an interview, he warns of the risks of the technology, from fake news to autonomous weapons.
  • Hinton is so worried about AI that part of him regrets his life's work.
Online journalist Matthias is the co-founder and publisher of THE DECODER. He believes that artificial intelligence will fundamentally change the relationship between humans and computers.