Geoffrey Hinton, often called the "Godfather of AI," is urging researchers to design machines with nurturing instincts to protect humanity as AI systems surpass human intelligence.
Speaking at the Ai4 conference in Las Vegas, Hinton argued that trying to keep machines permanently subservient won't work. Instead of acting as the boss, he said, humans should relate to future superintelligent AI more like a child does to its mother.
Hinton outlined a vision in which a less intelligent being guides a more intelligent one, the way a child steers its mother. He believes AI research should focus not only on making machines smarter, but also on making them more caring, so they look after their "babies." Hinton sees potential for genuine international cooperation here, since every country wants AI to support people rather than replace them. After more than a decade at Google, Hinton left the company to speak more freely about AI's risks.
Meta's chief AI scientist calls for built-in guardrails
Yann LeCun, Chief AI Scientist at Meta, described Hinton's proposal on LinkedIn as a simplified version of an approach he's advocated for years: wiring AI architectures so that systems can only take actions to achieve specific goals - with strict guardrails in place. LeCun calls this "objective-driven AI." Examples of guardrails include subservience to humans and empathy, along with many simple, low-level rules like "Don't run people over" and "Don't swing your arm around when people are nearby, especially if you're holding a kitchen knife."
LeCun says these hard-coded goals would serve as the AI equivalent of instincts and drives found in animals and humans. Evolution has hardwired parental instincts that drive care, protection, and sometimes deference to offspring. As a side effect, humans and many species are also inclined to protect, befriend, and nurture helpless or cute creatures from other species - even those they might otherwise eat, LeCun notes on LinkedIn.
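The core idea behind objective-driven AI can be illustrated in code. The sketch below is a hypothetical toy example, not LeCun's actual architecture or Meta's implementation: guardrails act as hard constraints that filter the action space before the system optimizes its task objective, so no degree of task pressure can trade them away. All function and field names are invented for illustration.

```python
# Toy sketch of "objective-driven" action selection (illustrative only,
# not LeCun's actual architecture). Guardrails are hard constraints checked
# first; the task objective is optimized only over actions that pass them.

def task_cost(action, goal):
    """Lower is better: how far the action's outcome falls from the goal."""
    return abs(action["outcome"] - goal)

# Hard-coded guardrails, analogous to the low-level rules in the article.
guardrails = [
    lambda a: not a.get("harms_human", False),  # e.g. "Don't run people over"
    lambda a: a.get("speed", 0) <= 5,           # e.g. a simple safety limit
]

def choose_action(actions, goal):
    # Filter out every action that violates any guardrail...
    safe = [a for a in actions if all(g(a) for g in guardrails)]
    if not safe:
        return None  # no permissible action: do nothing rather than act unsafely
    # ...then optimize the task objective over what remains.
    return min(safe, key=lambda a: task_cost(a, goal))

actions = [
    {"name": "fast", "outcome": 10, "speed": 9},               # violates speed limit
    {"name": "reckless", "outcome": 10, "harms_human": True},  # violates safety rule
    {"name": "careful", "outcome": 8, "speed": 4},             # allowed
]
print(choose_action(actions, goal=10)["name"])  # -> careful
```

The key design point is the ordering: constraints are applied before optimization, so the best-scoring but unsafe actions ("fast", "reckless") are never even candidates, which is the sense in which such rules would function like hardwired instincts rather than preferences the system can trade off.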