Renowned AI researcher Geoffrey Hinton made headlines when he left Google to warn the world about AI threats such as mass-produced fake news and autonomous weapons. About another of his theses, he has kept a much lower profile.

Hinton argues that human-like intelligence can only be achieved, and possibly surpassed, through deep learning - a view that has both supporters and critics in expert circles.

In a talk at King's College in London, Hinton put forward another thesis that is likely to stir emotions in the AI industry.

Asked whether AI systems might one day have emotional intelligence and understand that they have feelings, Hinton replied, "I think they could well have feelings. They won't have pain the way you do unless we wanted, but things like frustration and anger, I don't see why they shouldn't have those."


Hinton's view rests on a definition of feelings that is "unpopular among philosophers": describing a hypothetical action ("I feel like punching Gary on the nose") as a way of communicating an emotional state (anger). Since AI systems can produce such communications, the AI researcher sees no reason why they should not be ascribed emotions. In fact, he suggests that they "probably" already have them.

He has not said this publicly before because his first thesis, that superior AI threatens humanity, has already met with resistance. If he had added his thesis about machine emotions, Hinton says, people would have called him crazy and stopped listening.

In practice, Hinton's thesis is unlikely to be verifiable or falsifiable, since LLMs may only be reflecting the statistically probable emotional utterances they have learned through training. Whether they actually have emotions of their own as entities would probably require clarifying the nature of consciousness, and there is no scientific instrument for measuring consciousness.

Summary
  • AI researcher Geoffrey Hinton believes that AI systems could one day feel emotions such as frustration and anger, because they can describe hypothetical actions associated with those emotions.
  • Hinton had not made this thesis public before because his warnings about the threat of superior AI have already met with resistance; had he added the thesis about machine emotions, he says, people would have dismissed him as crazy and stopped listening.
  • Whether AI systems actually have emotions of their own may depend on clarifying the nature of consciousness. Since there is currently no instrument to measure consciousness, Hinton's thesis can neither be verified nor falsified.
Online journalist Matthias is the co-founder and publisher of THE DECODER. He believes that artificial intelligence will fundamentally change the relationship between humans and computers.