Ilya Sutskever, co-founder and longtime chief scientist at OpenAI, has formed a new company called Safe Superintelligence Inc. (SSI) with two partners. Their ambitious goal is to develop safe superintelligent AI.
Sutskever has teamed up with investor Daniel Gross and former OpenAI engineer Daniel Levy to launch the company, which is based in Palo Alto and Tel Aviv.
The company aims to solve what it calls the "most important technical problem of our time": creating superintelligent AI that is safe and reliable. To accomplish this, SSI plans to assemble a "lean, cracked team of the world’s best engineers and researchers."
"SSI is our mission, our name, and our entire product roadmap, because it is our sole focus," the announcement states. "Our team, investors, and business model are all aligned to achieve SSI."
The company intends to advance AI capabilities and safety in parallel, treating both as technical problems to be solved through breakthroughs in engineering and science. Safe superintelligent AI will be its sole R&D focus. "We plan to advance capabilities as fast as possible while making sure our safety always remains ahead," the announcement continues.
Sutskever left OpenAI in mid-May after nearly a decade with the company, and several other AI safety researchers departed around the same time. After Sutskever's exit, OpenAI CEO Sam Altman disbanded the Superalignment team that Sutskever had co-led, which was dedicated to the safety of superintelligent AI.
As an OpenAI board member, Sutskever was involved in the temporary removal of Altman as CEO in November 2023. According to reports, he had earlier raised concerns about the pace of commercialization Altman was pushing and the safety risks it posed.
Now, the AI pioneer, who co-authored the seminal AlexNet paper in 2012, appears to be addressing those concerns head-on with his own company.