
Ilya Sutskever, co-founder and longtime chief scientist at OpenAI, has formed a new company called Safe Superintelligence Inc. (SSI) with two partners. Their ambitious goal is to develop safe superintelligent AI.


Sutskever has teamed up with investor Daniel Gross and former OpenAI engineer Daniel Levy to launch the company, which is based in Palo Alto and Tel Aviv.

The company aims to solve what it calls "the most important technical problem of our time": creating superintelligent AI that is safe and reliable. To accomplish this, SSI plans to assemble a "lean, cracked team of the world's best engineers and researchers."

Image: Sutskever via X

"SSI is our mission, our name, and our entire product roadmap, because it is our sole focus," the announcement states. "Our team, investors, and business model are all aligned to achieve SSI."


The company aims to advance AI capabilities and safety in parallel, making transformative leaps in both technology and science. Safe superintelligent AI will be the company's sole R&D focus. "We plan to advance capabilities as fast as possible while making sure our safety always remains ahead."

Sutskever left OpenAI in mid-May after a decade with the company. Many AI safety researchers left with him. After Sutskever's departure, OpenAI CEO Sam Altman disbanded the superintelligent AI safety team that Sutskever had led.

As an OpenAI board member, Sutskever was involved in the temporary removal of Altman as CEO in November 2023. According to reports, he had previously raised concerns about the rapid commercialization Altman was pushing and the safety risks it posed.

Now, the AI pioneer, who co-authored the seminal AlexNet paper in 2012, appears to be addressing those concerns head-on with his own company.

Join our community
Join the DECODER community on Discord, Reddit or Twitter - we can't wait to meet you.
Summary
  • Ilya Sutskever, co-founder and former chief scientist of OpenAI, together with investor Daniel Gross and former OpenAI engineer Daniel Levy, has founded Safe Superintelligence Inc. (SSI) to develop safe superintelligence.
  • The company, based in Palo Alto and Tel Aviv, aims to recruit a small team of the world's best engineers and researchers to advance the capabilities and safety of AI in parallel, making breakthrough technical and scientific advances.
  • Sutskever left OpenAI in May 2024 after nearly ten years, reportedly over concerns about the rapid commercialization of AI under CEO Sam Altman and the associated safety risks. With SSI, he now appears to be addressing those concerns through his own company.
Online journalist Matthias is the co-founder and publisher of THE DECODER. He believes that artificial intelligence will fundamentally change the relationship between humans and computers.