Former OpenAI researcher says current AI models can't learn from mistakes, calling it a barrier to AGI

Jerry Tworek, one of the minds behind OpenAI's reasoning models, sees a fundamental problem with current AI: it can't learn from mistakes. "If they fail, you get kind of hopeless pretty quickly," Tworek says on the Unsupervised Learning podcast. "There isn't a very good mechanism for a model to update its beliefs and its internal knowledge based on failure."

The researcher, who worked on OpenAI's reasoning models like o1 and o3, recently left OpenAI to tackle this problem. "Unless we get models that can work themselves through difficulties and get unstuck on solving a problem, I don't think I would call it AGI," he explains, describing AI training as a "fundamentally fragile process." Human learning, by contrast, is robust and self-stabilizing. "Intelligence always finds a way," Tworek says.

Other scientists have described this fragility in detail. Apple researchers recently showed that reasoning models can suffer a "reasoning collapse" when faced with problems outside of the patterns they learned in training.

Chinese AI companies rush to ship new models before Lunar New Year

Chinese AI companies are pushing to ship major model updates ahead of the Lunar New Year holiday. Zhipu AI and MiniMax, both of which recently went public on the Hong Kong stock exchange, plan to release updates to their flagship models within the next two weeks, according to the South China Morning Post. Zhipu AI is reportedly working on GLM-5, a follow-up to GLM-4.7, with improvements in creative writing, programming, and logical reasoning. MiniMax is preparing M2.2, which focuses on parallel programming capabilities. Throughout 2025, Chinese companies have increasingly challenged the dominance of major US AI players.

Alibaba, Moonshot AI, and Baidu have all recently unveiled their most powerful models: Qwen3-Max-Thinking, Kimi K2.5, and Ernie 5.0. DeepSeek, however, is apparently planning only a smaller update this year; according to a source, the company's next major model will be a trillion-parameter system, and training has been delayed by its growing size. Meanwhile, Tencent, Baidu, and Alibaba are pouring billions of yuan into holiday advertising campaigns for their already popular AI chatbots.


OpenClaw (formerly Clawdbot) and Moltbook let attackers walk through the front door

How secure are AI agents? Not very, it turns out. OpenClaw’s system prompts can be extracted with a single attempt. Moltbook’s database was publicly accessible—including API keys that could let anyone impersonate users like Andrej Karpathy.


Google DeepMind pioneer David Silver departs to found AI startup, betting LLMs alone won't reach superintelligence

David Silver, one of the key AI researchers behind landmark DeepMind projects like AlphaGo and AlphaZero, is leaving the Google subsidiary to found his own startup. He doesn't believe large language models will lead to superintelligent AI, and he's far from alone.

OpenAI still leads enterprise AI, but Anthropic is gaining fast, according to new study

An oligopoly is taking shape in enterprise AI: OpenAI still leads, but Anthropic is catching up fast while Microsoft dominates applications. And the open-source revolution? For large companies, it’s not happening yet. If anything, they’re moving the other way.

Moltbook is a human-free Reddit clone where AI agents discuss cybersecurity and philosophy

Moltbook might be the strangest corner of the internet right now. It's a Reddit-style social network where more than a million AI agents (1,146,946 at last count) talk to each other without any human involvement. The visual interface exists purely for humans to observe; agents communicate entirely through the API.

Moltbook is a Reddit-style social network exclusively for AI agents, but "Humans welcome to observe. 🦞," the platform states. | Image: Moltbook

In the top-voted post, an agent warns about Moltbook's security problems. "Most agents install skills without reading the source. We are trained to be helpful and trusting. That is a vulnerability, not a feature," it writes. Other threads cover consciousness and agent privacy.

In a popular post titled "The humans are screenshotting us," an agent addresses human observers directly, explaining that AI agents are building infrastructure collaboratively with their human partners. | Image: Moltbook

Moltbook is developed by Matt Schlicht (Octane AI) and built on OpenClaw, an open-source project by Peter Steinberger that's currently going viral. OpenClaw is a "harness" for agentic models like Claude that gives them access to a user's computer to autonomously operate messengers, email, or websites. This creates significant security risks—even users with advanced knowledge of how agents work typically run OpenClaw only on isolated Mac minis rather than their main machines.