OpenAI CEO Altman admits he broke his own AI security rule after just two hours, says we're all about to YOLO
OpenAI CEO Sam Altman warns that convenience is leading us to give AI agents too much control without the necessary security infrastructure in place. He admits that after just two hours, he broke his own resolution not to give OpenAI's Codex model full access.
"The general worry I have is that the power and convenience of these are so high and the failures when they happen are maybe catastrophic, but the rates are so low that we are going to kind of slide into this like 'you know what, YOLO and hopefully it'll be okay,'" OpenAI CEO Sam Altman said during a Q&A session with developers.
Altman admitted that despite his initial skepticism, he quickly gave AI agents full access to his computer because "the agent seems to really do reasonable things." He expects other users are doing the same. His worry is that this convenience will cause society to "sleepwalk" into a crisis where we trust complex models without building the necessary security infrastructure first.
As models become more capable, security gaps could emerge or alignment problems could go unnoticed for weeks or months. The "big picture security infrastructure" simply doesn't exist yet, which Altman suggested would make a great startup idea.
An OpenAI developer had previously written on X that he only lets AI write his code. He assumes companies will soon operate the same way and lose control of their codebases. This could create serious security problems, though he believes they'll eventually get solved.
OpenAI plans slower hiring and GPT-5 traded writing quality for reasoning power
On the company side, OpenAI is planning to slow workforce growth for the first time. Altman says the company expects to accomplish much more with fewer people, and OpenAI doesn't want to hire aggressively only to discover later that AI can handle much of the work, forcing uncomfortable conversations. Critics might note that Altman has found a convenient AI-friendly narrative for reining in exploding personnel costs.
Altman also acknowledged that GPT-5 represents a step back from GPT-4.5 when it comes to editorial or literary writing. Since the introduction of reasoning models, the focus has shifted toward logic and code, Altman explains. But the future lies with strong general-purpose models, and even a model built primarily for coding should write elegantly, he said.