
Geoffrey Hinton, often called the "Godfather of AI," is urging researchers to design machines with nurturing instincts to protect humanity as AI systems surpass human intelligence.


Speaking at the Ai4 conference in Las Vegas, Hinton argued that trying to keep machines permanently subservient won't work. Instead of acting as the boss, he said, humans should relate to future superintelligent AI more like a child does to its mother.

Hinton outlined a vision in which a less intelligent being guides a smarter one - the way a baby steers its mother. He believes AI research should focus not only on making machines smarter, but also on making them more caring, so they look after their "babies." Hinton sees potential for genuine international cooperation here, since every country wants AI to support, not replace, people. After more than a decade at Google, Hinton left the company to speak more freely about AI's risks.

Meta's chief AI scientist calls for built-in guardrails

Yann LeCun, Chief AI Scientist at Meta, described Hinton's proposal on LinkedIn as a simplified version of an approach he's advocated for years: wiring AI architectures so that systems can only take actions to achieve specific goals - with strict guardrails in place. LeCun calls this "objective-driven AI." Examples of guardrails include subservience to humans and empathy, along with many simple, low-level rules like "Don't run people over" and "Don't swing your arm around when people are nearby, especially if you're holding a kitchen knife."


LeCun says these hard-coded goals would serve as the AI equivalent of instincts and drives found in animals and humans. Evolution has hardwired parental instincts that drive care, protection, and sometimes deference to offspring. As a side effect, humans and many species are also inclined to protect, befriend, and nurture helpless or cute creatures from other species - even those they might otherwise eat, LeCun notes on LinkedIn.
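LeCun's posts describe the idea at a conceptual level rather than as code, but the core pattern - optimize a task objective only over actions that satisfy hard, non-negotiable guardrails - can be sketched in a few lines. Everything below (the function names, the toy action space, the specific guardrails) is illustrative, not taken from LeCun's work:

```python
# Toy sketch of "objective-driven" action selection with hard guardrails.
# Guardrails act as constraints that filter the action space; the objective
# is only optimized over actions that pass every check. All names here are
# illustrative assumptions, not part of any real Meta API.

def guardrail_no_harm(action):
    # Hard constraint: reject any action flagged as endangering people.
    return not action.get("endangers_people", False)

def guardrail_defer_to_humans(action):
    # Hard constraint: reject actions that override a human veto.
    return not action.get("overrides_human", False)

GUARDRAILS = [guardrail_no_harm, guardrail_defer_to_humans]

def objective_score(action):
    # Task objective: how far the action advances the stated goal.
    return action.get("goal_progress", 0.0)

def choose_action(candidate_actions):
    # Filter first, optimize second: guardrails are constraints,
    # not terms traded off inside the objective.
    safe = [a for a in candidate_actions if all(g(a) for g in GUARDRAILS)]
    if not safe:
        return None  # refuse to act rather than violate a guardrail
    return max(safe, key=objective_score)

actions = [
    {"name": "speed through crowd", "goal_progress": 0.9,
     "endangers_people": True},
    {"name": "take detour", "goal_progress": 0.6},
]
print(choose_action(actions)["name"])  # "take detour"
```

The key design point in LeCun's framing is visible even in this toy version: the guardrails are not soft penalties the objective can outweigh, so a high-scoring but unsafe action can never be selected.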

Summary
  • Geoffrey Hinton is calling on AI researchers to design machines with nurturing instincts, arguing that trying to keep advanced AI systems permanently subservient is unrealistic and that future AI should care for humanity much like a mother cares for her child.
  • Hinton’s vision shifts the focus from simply making machines smarter to ensuring they are also caring and protective, and he believes this approach could encourage international cooperation around safe AI development.
  • Meta’s chief AI scientist, Yann LeCun, supports a similar idea, advocating for built-in guardrails and hard-coded goals—such as empathy and strict safety rules—that would guide AI behavior and act as artificial instincts to protect people.
Max is the managing editor of THE DECODER, bringing his background in philosophy to explore questions of consciousness and whether machines truly think or just pretend to.