AI researcher Geoffrey Hinton thinks AI has or will have emotions

Matthias Bastian

Renowned AI researcher Geoffrey Hinton made headlines when he left Google to warn the world about AI threats such as mass-produced fake news and autonomous weapons. But there is one thesis he has kept quiet about until now.

Hinton argues that human-like intelligence can only be achieved, and possibly surpassed, through deep learning - a view that has both supporters and critics in expert circles.

In a talk at King's College London, Hinton put forward that other thesis, one likely to stir emotions in the AI industry.

Asked whether AI systems might one day have emotional intelligence and understand that they have feelings, Hinton replied, "I think they could well have feelings. They won't have pain the way you do unless we wanted, but things like frustration and anger, I don't see why they shouldn't have those."

Hinton's view rests on a definition of feelings that he admits is "unpopular among philosophers": to express an emotional state (anger) is to describe a hypothetical action ("I feel like punching Gary on the nose"). Since AI systems can produce exactly this kind of communication, Hinton sees no reason why they should not be ascribed emotions. In fact, he suggests they "probably" already have them.

He had not said this publicly before because his first thesis, that superior AI could threaten humanity, had already met with resistance. Had he added the claim that machines have emotions, Hinton says, people would have called him crazy and stopped listening.

In practice, Hinton's thesis is unlikely to be verifiable or falsifiable: when LLMs produce emotional utterances, they may merely be reproducing statistically probable patterns learned from their training data. Whether a model, as an entity, actually has emotions of its own would likely require settling the question of consciousness first, and there is no scientific instrument for measuring consciousness.