OpenClaw developer Peter Steinberger joins OpenAI to build AI agents

Peter Steinberger, the developer behind the open-source project OpenClaw, is joining OpenAI. His focus will be on building the next generation of personal AI agents. OpenAI CEO Sam Altman called Steinberger a "genius with a lot of amazing ideas about the future of very smart agents interacting with each other to do very useful things for people." Altman expects this work to quickly become a core part of OpenAI's product lineup.

OpenClaw, Steinberger's original hobby project, which blew up over the past few weeks, will "live in a foundation as an open-source project" and will be supported by OpenAI, Altman says, calling the future "extremely multi-agent."

Steinberger writes in his blog that he spoke to several large AI labs in San Francisco but ultimately chose OpenAI because they shared the same vision. Steinberger's goal: building an agent that even his mother can use. Getting there, he says, requires fundamental changes, more security research, and access to the latest models.

What I want is to change the world, not build a large company, and teaming up with OpenAI is the fastest way to bring this to everyone.

Peter Steinberger
Google and OpenAI complain about distillation attacks that clone their AI models on the cheap

Google and OpenAI are complaining about data theft—yes, you read that right. According to Google, Gemini was hit with a massive cloning attempt through distillation, with a single campaign firing over 100,000 requests at the model, NBC News reports. Google calls it intellectual property theft, pointing to companies and researchers chasing a competitive edge.

Meanwhile, OpenAI has sent a memo to the US Congress accusing DeepSeek of using disguised methods to copy American AI models. The memo also flags China's energy buildout, which added ten times as much new electricity capacity as the US by 2025, and notes that ChatGPT is growing at around ten percent per month.

Distillation floods a model with targeted prompts to extract its internal logic, especially its "reasoning steps," then uses that knowledge to build a cheaper clone, potentially skipping billions in training costs. Google security head John Hultquist warns smaller companies running their own AI models face the same risk, particularly if those models were trained on sensitive business data.
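The harvesting step can be sketched in a few lines. This is a hypothetical illustration of the flow described above, not code from Google or OpenAI: the "teacher" function stands in for a proprietary model's API endpoint, and all names are invented.

```python
def query_teacher(prompt: str) -> dict:
    # Stand-in for one of the 100,000+ API requests a real campaign would
    # send, each designed to elicit the model's answer and "reasoning steps."
    return {
        "prompt": prompt,
        "reasoning": f"step-by-step analysis of: {prompt}",  # placeholder
        "answer": prompt.upper(),  # placeholder capability to imitate
    }

def build_distillation_set(prompts: list[str]) -> list[dict]:
    # The harvested prompt/reasoning/answer triples become training data
    # for a cheaper "student" clone, sidestepping the teacher's original
    # multi-billion-dollar training cost.
    return [query_teacher(p) for p in prompts]

dataset = build_distillation_set(["pricing strategy", "contract review"])
```

The same loop is why Hultquist's warning applies to smaller companies too: any model reachable through an API, including one fine-tuned on sensitive business data, can be probed this way.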

Anthropic CEO Dario Amodei suggests OpenAI doesn't "really understand the risks they're taking"

Anthropic’s revenue has grown 10x year over year, and CEO Dario Amodei believes Nobel Prize-level AI is maybe just a year or two away. So why isn’t he going all in on compute? Because being off by even one year could mean bankruptcy, and he’s not sure his competitors have done the math.

OpenAI is retiring GPT-4o and three other legacy models tomorrow, likely for good

OpenAI is dropping several older AI models from ChatGPT on February 13, 2026: GPT-4o, GPT-4.1, GPT-4.1 mini, and o4-mini. The models will stick around in the API for now. The company says it comes down to usage: only 0.1 percent of users still pick GPT-4o on any given day.

There's a reason OpenAI is being so careful about GPT-4o specifically: the model has a complicated past. OpenAI already killed it once back in August 2025, only to bring it back for paying subscribers after users pushed back hard. Some people had grown genuinely attached to the model, which was known for its sycophantic, people-pleasing communication style. OpenAI addresses this head-on at the end of the post:

We know that losing access to GPT‑4o will feel frustrating for some users, and we didn’t make this decision lightly. Retiring models is never easy, but it allows us to focus on improving the models most people use today.

OpenAI

OpenAI points to GPT-5.1 and GPT-5.2 as improved successors that incorporate feedback from GPT-4o users. People can now tweak ChatGPT's tone and style, things like warmth and enthusiasm. But that probably won't be enough for the GPT-4o faithful.

Pentagon pushes AI companies to deploy unrestricted models on classified military networks

The Pentagon is pressing leading AI companies including OpenAI, Anthropic, Google, and xAI to make their AI tools available on classified military networks – without the usual usage restrictions.

OpenAI reportedly uses a "special version" of ChatGPT to hunt down internal leakers by scanning Slack and email

OpenAI uses a "special version" of ChatGPT to track down internal information leaks. That's according to a report from The Information, citing a person familiar with the matter. When a news article about internal operations surfaces, OpenAI's security team feeds the text into this custom ChatGPT version, which has access to internal documents as well as employees' Slack and email messages.

The system then suggests possible sources of the leak by identifying files or communication channels that contain the published information and showing who had access to them. It's unclear whether OpenAI has actually caught anyone using this method.

What exactly makes this version special isn't known, but there's a clue: OpenAI engineers recently presented the architecture of an internal AI agent that could serve this purpose. It's designed to let employees run complex data analysis using natural language and has access to institutional knowledge stored in Slack messages, Google Docs, and more.

OpenAI researcher quit over ads because she doesn't trust her former employer to keep its own promises

OpenAI wants to put ads in ChatGPT and former researcher Zoe Hitzig says that’s a dangerous move. She spent two years at the company and doesn’t believe OpenAI can resist the temptation to exploit its users’ most personal conversations.

OpenAI upgrades Responses API with features built specifically for long-running AI agents

OpenAI is adding new capabilities to its Responses API that are built specifically for long-running AI agents. The update brings three major features: server-side compression that keeps agent sessions going for hours without blowing past context limits, controlled internet access for OpenAI-hosted containers so they can install libraries and run scripts, and "skills": reusable bundles of instructions, scripts, and files that agents can pull in and execute on demand.

Skills work as a middle layer between system prompts and tools. Instead of stuffing long workflows into every prompt, developers can package them as versioned bundles that only kick in when needed. They ship as ZIP files, support versioning, and work in both hosted and local containers through the API. OpenAI recommends building skills like small command-line programs and pinning specific versions in production.
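The packaging side of this can be sketched with nothing but the standard library. The file names, manifest contents, and layout below are assumptions for illustration, not OpenAI's documented bundle format; the point is the shape of the idea: a versioned, self-contained ZIP that behaves like a small command-line program.

```python
import io
import zipfile

# Hypothetical skill contents (names and manifest format are invented).
skill_files = {
    "SKILL.md": "# csv-summary, v1.2.0\nSummarize a numeric CSV column.",
    "run.py": "import csv, statistics, sys\n# ... command-line entry point ...\n",
}

# Package the skill as a versioned ZIP bundle, the format the update
# says skills ship in.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    for name, body in skill_files.items():
        zf.writestr(name, body)
skill_zip = buf.getvalue()

# An agent (in a hosted or local container) would later unpack and run it:
names = zipfile.ZipFile(io.BytesIO(skill_zip)).namelist()
```

Pinning a version in the manifest, as in the `v1.2.0` line above, mirrors OpenAI's advice to pin specific skill versions in production so an agent's behavior doesn't change under it.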