
Jan Leike, the former head of AI Alignment and Superintelligence Safety at OpenAI, has sharply criticized his former employer for neglecting safety priorities and processes.


Leike says he joined OpenAI because he believed it was the best place in the world to do AI safety research, specifically on superintelligence. But for some time he disagreed with OpenAI's leadership about the company's core priorities, until a "breaking point" was reached.

Leike, until yesterday the head of AI Alignment and Superintelligence Safety at OpenAI, announced his resignation shortly after renowned AI researcher, OpenAI co-founder, and safety pioneer Ilya Sutskever announced his own departure.

Leike now says he is convinced that OpenAI needs to devote far more computing power to preparing for and securing the next generations of AI models.


When OpenAI announced its Superalignment team last summer, it said it would dedicate 20 percent of its then-available computing power to the safety effort. Apparently, that promise was not kept.

A screenshot of Jan Leike's Twitter thread.
Image: Jan Leike via X

AI safety is a big challenge, Leike says, and OpenAI might be on the wrong track.

"Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity," writes Leike.

For the past few months, Leike's team has been "sailing against the wind." At times, they had to fight for computing power. According to Leike, it has become increasingly difficult to conduct this critical research.

Safety culture and processes have taken a back seat to "shiny products," Leike says. "We are long overdue in getting incredibly serious about the implications of AGI."


Sutskever's and Leike's departures fit with earlier rumors of growing internal opposition at OpenAI to the company's over-commercialization and rapid growth.

Summary
  • Jan Leike, former head of AI Alignment and Superintelligence Safety at OpenAI, announced his resignation shortly after renowned AI researcher Ilya Sutskever left the company.
  • He is now harshly criticizing the company's safety culture. Leike is convinced that OpenAI needs to devote far more computing power to preparing for and securing the next generations of AI models.
  • He is concerned that the company is not on the right track to meet the major challenges of AI safety. According to Leike, OpenAI's safety culture and processes are taking a back seat to "shiny products," and it is high time to get serious about the implications of artificial general intelligence (AGI).
Online journalist Matthias is the co-founder and publisher of THE DECODER. He believes that artificial intelligence will fundamentally change the relationship between humans and computers.