
OpenAI co-founder Ilya Sutskever used his honorary doctorate address at the University of Toronto to make a case for accepting what he sees as an unstoppable AI future.


Sutskever told the audience, "We all live in the most unusual time ever. And this is something that people might say often, but I think it's actually true this time. And the reason it's true this time is because of AI, right?"

He pointed out that AI is already changing what it means to be a student, and said the world of work is "starting to change a little bit in some unknown and unpredictable ways."

Instead of focusing on technical details and scientific facts, Sutskever leaned on his own intuition: "The brain is a biological computer. So why can't the digital computer, a digital brain, do the same things?"


For Sutskever, that's the "one sentence summary" for why AI will be able to match human abilities: "Anything which I can learn, anything which any one of you can learn, the AI could do as well."

He described the current state of AI as "evocative"—advanced enough to let people imagine what might be possible, but still "so deficient."

Sutskever sees superintelligence as an inevitable future

Sutskever emphasized that the timeline is uncertain—three, five, maybe ten years—but the direction is clear. "AI will do all of our... all the things that we can do. Not just some of them, but all of them." The big question, he said, is what happens then: "Those are dramatic questions."

He pointed to possible outcomes like more research, faster economic growth, and more automation, leading to a period where "the rate of progress will become really extremely fast for some time at least," resulting in "unimaginable things."

But he insisted there's no way to opt out. Paraphrasing the saying that you may not take an interest in politics, but politics will take an interest in you, Sutskever said the same principle applies to AI "many times over."


"And in some sense, whether you like it or not, your life is going to be affected by AI to a great extent," Sutskever concluded.

Sutskever left OpenAI in May 2024 after internal disputes among the company's leadership. He then founded Safe Superintelligence (SSI), a startup focused on developing safe superintelligent AI.

Details about SSI's work have not been made public, but late in 2024, Sutskever indicated that the previous scaling principles for AI had reached their limits and that new approaches were needed. Despite having no product or revenue, SSI is already valued at over $30 billion.

Summary
  • Former OpenAI chief scientist Ilya Sutskever spoke at the University of Toronto, urging people to prepare for the major shifts artificial intelligence will cause.
  • He said that AI could soon take on roles in every sector that were once exclusive to humans, and questioned how society should direct these new capabilities.
  • Sutskever argued that because the human brain is a biological computer, digital systems could ultimately replicate everything humans can do.
Matthias is the co-founder and publisher of THE DECODER, exploring how AI is fundamentally changing the relationship between humans and computers.