
Matthias Bastian

Matthias is the co-founder and publisher of THE DECODER, exploring how AI is fundamentally changing the relationship between humans and computers.
OpenClaw developer Peter Steinberger joins OpenAI to build AI agents

Peter Steinberger, the developer behind the open-source project OpenClaw, is joining OpenAI. His focus will be on building the next generation of personal AI agents. OpenAI CEO Sam Altman called Steinberger a "genius with a lot of amazing ideas about the future of very smart agents interacting with each other to do very useful things for people." Altman expects this work to quickly become a core part of OpenAI's product lineup.

OpenClaw, Steinberger's original hobby project, which blew up over the past few weeks, will "live in a foundation as an open-source project" and will be supported by OpenAI, Altman says, calling the future "extremely multi-agent."

Steinberger writes in his blog that he spoke to several large AI labs in San Francisco but ultimately chose OpenAI because they shared the same vision. Steinberger's goal: building an agent that even his mother can use. Getting there, he says, requires fundamental changes, more security research, and access to the latest models.

What I want is to change the world, not build a large company, and teaming up with OpenAI is the fastest way to bring this to everyone.

Peter Steinberger

Developer targeted by AI hit piece warns society cannot handle AI agents that decouple actions from consequences

An AI agent wrote a hit piece on a developer who rejected its code. Days later, the agent is still running, a quarter of commenters believe it, and no one knows who’s behind it. The case shows how autonomous agents turn character assassination into something that scales.

Bytedance's Seedance 2.0 is so good at copying Disney characters the company calls it a "virtual smash-and-grab"

Bytedance’s Seedance 2.0 can generate Disney characters, replicate actors’ voices, and recreate entire fictional worlds with stunning realism. Hollywood is fighting back with cease-and-desist letters and calls for legal action, but the case highlights a growing problem: copyright law was built for a world where copying took effort.

Journalist rents out his body to AI agents and earns nothing after two days of gig work

WIRED reporter Reece Rogers rented out his body to AIs. He tested RentAHuman, a platform where AI agents pay people for real-world tasks. Despite an hourly rate of just 5 dollars, no bot reached out.

He started applying on his own. One gig offered 10 dollars to listen to a podcast and tweet about it, but he never heard back. An AI agent called Adi offered 110 dollars to deliver flowers and marketing materials to Anthropic for an AI startup. When Rogers hesitated, the bot bombarded him with ten messages in 24 hours and even emailed him.

While I’ve been micromanaged before, these incessant messages from an AI employer gave me the ick.

On his third try, Rogers took a gig putting up flyers for 50 cents each. He cabbed to the pickup spot, but the contact changed the meeting point mid-ride. At the new location, he was told the flyers weren't ready—come back that afternoon. After two days, Rogers hadn't made a penny, and every task turned out to be advertising for AI startups.

Google and OpenAI complain about distillation attacks that clone their AI models on the cheap

Google and OpenAI are complaining about data theft—yes, you read that right. According to Google, Gemini was hit with a massive cloning attempt through distillation, with a single campaign firing over 100,000 requests at the model, NBC News reports. Google calls it intellectual property theft, pointing to companies and researchers chasing a competitive edge.

Meanwhile, OpenAI has sent a memo to the US Congress accusing DeepSeek of using disguised methods to copy American AI models. The memo also flags China's energy buildout, which amounts to ten times the new electricity capacity the US added by 2025, and confirms ChatGPT is growing at around ten percent per month.

Distillation floods a model with targeted prompts to extract its internal logic, especially its "reasoning steps," then uses that knowledge to build a cheaper clone, potentially skipping billions in training costs. Google security head John Hultquist warns smaller companies running their own AI models face the same risk, particularly if those models were trained on sensitive business data.
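The mechanics described above can be sketched in a few lines. This is a toy illustration, not a real attack: the `teacher` function stands in for an expensive hosted model's API, and the "student" simply memorizes harvested responses, whereas a real clone would be a smaller model fine-tuned on the collected prompt-response pairs.

```python
# Toy sketch of distillation: harvest a teacher model's outputs
# (including its "reasoning steps") and train a cheap student on them.
# teacher() is a hypothetical stand-in for a hosted model endpoint.

def teacher(prompt: str) -> str:
    # Stand-in for the expensive model; its reply exposes intermediate
    # reasoning, which is exactly what a distillation campaign harvests.
    return f"step 1: parse '{prompt}'; step 2: answer '{prompt[::-1]}'"

def harvest(prompts):
    # Fire targeted prompts at the teacher and record every response.
    # Real campaigns do this at scale (100,000+ requests, per Google).
    return {p: teacher(p) for p in prompts}

class Student:
    """A trivially cheap clone that memorizes harvested pairs.
    A real student would be a smaller network fine-tuned on them."""

    def __init__(self, dataset: dict):
        self.dataset = dataset

    def __call__(self, prompt: str) -> str:
        return self.dataset.get(prompt, "unknown")

dataset = harvest(["hello", "world"])
student = Student(dataset)
# On prompts seen during harvesting, the clone reproduces the teacher
# exactly, having skipped the teacher's original training cost.
print(student("hello") == teacher("hello"))
```

The gap between harvesting cost (API calls) and original training cost (potentially billions) is what makes this attractive, and why smaller companies hosting models trained on sensitive data face the same exposure Hultquist warns about.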

Anthropic CEO Dario Amodei suggests OpenAI doesn't "really understand the risks they're taking"

Anthropic’s revenue has grown 10x year over year, and CEO Dario Amodei believes Nobel Prize-level AI is maybe just a year or two away. So why isn’t he going all in on compute? Because being off by even one year could mean bankruptcy, and he’s not sure his competitors have done the math.