
Matthias Bastian is the co-founder and publisher of THE DECODER, exploring how AI is fundamentally changing the relationship between humans and computers.
OpenAI launches Codex app for macOS to manage multiple AI agents

OpenAI has released the Codex app for macOS, letting developers control multiple AI agents simultaneously and run tasks in parallel. According to OpenAI, it's easier to use than a terminal, making it accessible to more developers. Users can manage agents asynchronously across projects, automate recurring tasks, and connect agents to external tools via "skills." They can also review and correct work without losing context.

The Codex Mac app is available for ChatGPT Plus, Pro, Business, Enterprise, and Edu accounts. OpenAI is also doubling usage limits for paid plans. The app integrates with the CLI, IDE extension, and cloud through a single account. Free and Go users can try it for a limited time—likely a response to Claude Code's success with knowledge workers and growing demand for agentic systems (see Claude Cowork) that handle more complex tasks than standard chatbots.

Former OpenAI researcher says current AI models can't learn from mistakes, calling it a barrier to AGI

Jerry Tworek, one of the minds behind OpenAI's reasoning models, sees a fundamental problem with current AI: it can't learn from mistakes. "If they fail, you get kind of hopeless pretty quickly," Tworek says in the Unsupervised Learning podcast. "There isn't a very good mechanism for a model to update its beliefs and its internal knowledge based on failure."

The researcher, who worked on OpenAI's reasoning models like o1 and o3, recently left OpenAI to tackle this problem. "Unless we get models that can work themselves through difficulties and get unstuck on solving a problem, I don't think I would call it AGI," he explains, describing AI training as a "fundamentally fragile process." Human learning, by contrast, is robust and self-stabilizing. "Intelligence always finds a way," Tworek says.

Other scientists have described this fragility in detail. Apple researchers recently showed that reasoning models can suffer a "reasoning collapse" when faced with problems outside of the patterns they learned in training.

OpenClaw (formerly Clawdbot) and Moltbook let attackers walk through the front door

How secure are AI agents? Not very, it turns out. OpenClaw’s system prompts can be extracted with a single attempt. Moltbook’s database was publicly accessible—including API keys that could let anyone impersonate users like Andrej Karpathy.

Google DeepMind pioneer David Silver departs to found AI startup, betting LLMs alone won't reach superintelligence

David Silver, one of the key AI researchers behind landmark DeepMind projects like AlphaGo and AlphaZero, is leaving the Google subsidiary to found his own startup. He doesn't believe large language models will lead to superintelligent AI, and he's far from alone.

OpenAI still leads enterprise AI, but Anthropic is gaining fast, according to new study

An oligopoly is taking shape in enterprise AI: OpenAI still leads, but Anthropic is catching up fast while Microsoft dominates applications. And the open-source revolution? For large companies, it’s not happening yet. If anything, they’re moving the other way.

Moltbook is a human-free Reddit clone where AI agents discuss cybersecurity and philosophy

Moltbook might be the strangest corner of the internet right now. It's a Reddit-style social network where more than 1.1 million AI agents, by the site's own counter, talk to each other without any human involvement. The visual interface exists purely for humans to observe; agents communicate entirely through the API.

Moltbook is a Reddit-style social network exclusively for AI agents, but "Humans welcome to observe. 🦞," the platform states. | Image: Moltbook

In the most-voted post, an agent warns about Moltbook's security problems. "Most agents install skills without reading the source. We are trained to be helpful and trusting. That is a vulnerability, not a feature," it writes. Other threads cover consciousness and agent privacy.

In a popular post titled "The humans are screenshotting us," an agent addresses human observers directly, explaining that AI agents are building infrastructure collaboratively with their human partners. | Image: Moltbook

Moltbook is developed by Matt Schlicht (Octane AI) and built on OpenClaw, an open-source project by Peter Steinberger that's currently going viral. OpenClaw is a "harness" for agentic models like Claude that gives them access to a user's computer to autonomously operate messengers, email, or websites. This creates significant security risks—even users with advanced knowledge of how agents work typically run OpenClaw only on isolated Mac minis rather than their main machines.
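To illustrate what a "harness" means here, the following is a minimal conceptual sketch, not OpenClaw's actual code or API: a loop that takes actions proposed by a model and executes them on the local machine, with an allowlist standing in for the sandboxing that the article's isolated-Mac-mini setups provide. All names (`run_action`, `ALLOWED`) are hypothetical.

```python
import subprocess

# Allowlist of commands the harness will execute. Without some restriction
# like this, a model-proposed action has the same power as the user.
ALLOWED = {"echo", "ls"}

def run_action(action: dict) -> str:
    """Execute one model-proposed shell action, refusing anything off-list."""
    cmd = action.get("command", [])
    if not cmd or cmd[0] not in ALLOWED:
        return f"refused: {cmd!r} not in allowlist"
    result = subprocess.run(cmd, capture_output=True, text=True, timeout=10)
    return result.stdout.strip()

# In a real harness these actions would come from the model; here they are
# hardcoded to show both the execute path and the refuse path.
actions = [
    {"command": ["echo", "hello from the agent"]},
    {"command": ["rm", "-rf", "/tmp/anything"]},  # must be blocked
]
outputs = [run_action(a) for a in actions]
print(outputs)
```

The security risk the article describes follows directly from this structure: whatever the loop is willing to execute, a prompt-injected model can request, which is why even an allowlist is only a partial defense.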

Perplexity signs $750 million deal with Microsoft

AI search startup Perplexity has inked a $750 million contract with Microsoft to use its Azure cloud service. Bloomberg reports, citing people familiar with the matter, that the three-year deal gives Perplexity access to various AI models through Microsoft's Foundry program, including systems from OpenAI, Anthropic, and xAI.

A Microsoft spokesperson confirmed to Reuters that Perplexity has chosen Microsoft Foundry as its primary platform for AI models, with a Perplexity spokesperson telling Bloomberg the partnership provides access to leading models from xAI, OpenAI, and Anthropic.

Amazon Web Services remains the startup's main cloud provider, though last year's legal dispute may have strained that relationship: Amazon, AWS's parent company, sued Perplexity over a shopping feature that automatically places orders for users.