
Matthias Bastian

Matthias is the co-founder and publisher of THE DECODER, exploring how AI is fundamentally changing the relationship between humans and computers.

Half of xAI's co-founders have now left Elon Musk's AI startup

Jimmy Ba is the latest co-founder to leave xAI, and like the five who left before him, he’s full of praise for the company and predicts massive AI breakthroughs ahead. Yet somehow, half of xAI’s twelve founding members have still walked out the door.

OpenAI's Deep Research now runs on GPT-5.2 and lets users search specific websites

OpenAI has upgraded Deep Research in ChatGPT. The feature now runs on the new GPT-5.2 model, as OpenAI announced on X. A key addition is that users can connect apps to ChatGPT and—potentially very useful—search specific websites. The search progress can also be tracked in real time, interrupted with questions, or supplemented with new sources. Results can now be displayed as full-screen reports.

Until now, Deep Research—which launched in 2025—ran on the o3 and o4-mini models. OpenAI considers it the first "AI agent" in ChatGPT, since the system independently kicks off multi-stage web searches based on the user's query before generating a response.

That said, even web searches don't protect against generative AI errors, and the longer the generated text, the higher the risk of mistakes. In everyday use, targeted search queries with capable reasoning models are often more reliable. Web search significantly reduces hallucination rates overall, but doesn't eliminate them.

Google's AI drug discovery spinoff Isomorphic Labs claims major leap beyond AlphaFold 3

Isomorphic Labs, Google DeepMind's AI medicine startup, has unveiled a new system called "Isomorphic Labs Drug Design Engine" (IsoDDE) that it says outperforms AlphaFold 3. According to the company, IsoDDE doubles AlphaFold 3's accuracy when predicting protein-ligand structures that differ significantly from the training data (see left graph below).

IsoDDE outperforms previous methods in structure prediction, binding pocket recognition, and binding strength prediction, according to Isomorphic Labs. | Image: Isomorphic Labs

Beyond structure prediction, IsoDDE can identify previously unknown docking sites on proteins in seconds based solely on their blueprint, with accuracy that Isomorphic Labs says approaches that of lab experiments. Isomorphic Labs also claims the system can estimate how strongly a drug binds to its target at a fraction of the time and cost of traditional methods. These capabilities could uncover new starting points for active compounds and speed up computational screening.

Isomorphic Labs says it already uses IsoDDE daily in its own research programs to develop new drug candidates. Details are available in the company's technical report.

Anthropic's head of Safeguards Research warns of declining company values on departure

Anthropic is starting to feel the OpenAI effect. Growing commercialization and the need to raise billions of dollars are forcing the company into compromises, from accepting money from authoritarian regimes and working with the US Department of Defense and Palantir to praising Donald Trump. Now Mrinank Sharma, head of the Safeguards Research Team—the group responsible for keeping AI models safe—is leaving. In his farewell post, he suggests Anthropic has drifted away from its founding principles.

Throughout my time here, I've repeatedly seen how hard it is to truly let our values govern our actions. I've seen this within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout broader society too.

Mrinank Sharma

The Oxford-educated researcher says the time has come to move on. His departure echoes a pattern already familiar at OpenAI, which saw its own wave of safety researchers leave over concerns that the company was prioritizing revenue growth over responsible deployment. Anthropic was originally founded by former OpenAI employees who wanted to put AI safety first, making Sharma's exit all the more telling.

The new Gemini-based Google Translate can be hacked with simple words

A simple prompt injection trick can turn Google Translate into a chatbot that answers questions and even generates dangerous content, a direct consequence of Google switching the service to Gemini models in late 2025.

ChatGPT now shows ads to free and Go users, with opt-out cutting daily message limits

OpenAI is rolling out ads in ChatGPT for users in the United States. The test targets logged-in adult users on the free and "Go" tiers. Plus, Pro, Business, Enterprise, and Education plans remain ad-free. Free-tier users can opt out of advertising, but doing so reduces their daily message allowance.

OpenAI says the decision comes down to high infrastructure costs. The company stresses that ads don't influence ChatGPT's responses, and conversations stay private. Which ad a user sees depends on the conversation topic, previous chats, and interactions.

Users under 18 won't see any ads, and ads won't appear around sensitive topics like health or politics. Users can hide individual ads, delete their ad data, and adjust personalization settings. Advertisers get aggregated performance statistics but have no access to chat logs or personal data, OpenAI says.

What will always remain true: ChatGPT’s answers remain independent and unbiased, conversations stay private, and people keep meaningful control over their experience.

Putting ads in chatbots is controversial, since the potential for manipulation is greater than with traditional search engines. OpenAI says it will keep ads clearly separated from content. Long term, the company plans to roll out additional ad formats.

OpenAI says ChatGPT is growing again, plans new model this week

OpenAI CEO Sam Altman told employees in an internal Slack message that ChatGPT is once again growing by more than ten percent per month, CNBC reports. The last official number was 800 million weekly users in January 2026.

Altman also said an updated chat model for ChatGPT is set to ship this week. It could be the chat variant of GPT-5.3, which OpenAI released last week as the coding-focused version, Codex. The model scores particularly well on agentic coding benchmarks and is 25 percent faster, according to OpenAI.

The Codex coding product has grown roughly 50 percent in just one week, according to Altman, who called the growth "insane." It competes directly with Anthropic's popular Claude Code. OpenAI's new Codex desktop app in particular is likely to expand gradually beyond coding use cases, following a similar path to Anthropic's Cowork.

Source: CNBC
Claude Opus 4.6 takes the top spot on Artificial Analysis Intelligence Index, but OpenAI's Codex 5.3 looms

Claude Opus 4.6 is the new top-ranked AI model, at least until Artificial Analysis finishes benchmarking OpenAI's Codex 5.3, which will likely pull ahead in coding. Anthropic's latest model leads the Artificial Analysis Intelligence Index, a composite of ten tests covering coding, agent tasks, and scientific reasoning, with first-place finishes in agent-based work tasks, terminal coding, and physics research problems.

Image: Artificial Analysis

Running the complete test suite costs $2,486, more than the $2,304 required for GPT-5.2 at maximum reasoning performance. Opus 4.6 consumed roughly 58 million output tokens, twice as many as Opus 4.5 but significantly fewer than GPT-5.2's 130 million. The higher total price comes down to Anthropic's token pricing of $5 and $25 per million input and output tokens, respectively.
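The cost figures above can be cross-checked with a quick back-of-the-envelope calculation. This is a rough sketch, not Artificial Analysis's actual billing breakdown: it assumes the $2,486 total splits cleanly into input and output token costs at Anthropic's published rates, takes the ~58 million output tokens from the article as given, and infers the input-token volume from what's left over.

```python
# Rough cost breakdown for the Opus 4.6 benchmark run described above.
# Assumption: total cost = input cost + output cost at list prices;
# the input-token count is inferred, not reported.
PRICE_IN = 5.00    # USD per million input tokens (Anthropic list price)
PRICE_OUT = 25.00  # USD per million output tokens (Anthropic list price)

total_cost = 2486.0       # USD, reported benchmark total
output_tokens_m = 58.0    # millions of output tokens, reported

output_cost = output_tokens_m * PRICE_OUT       # cost of generated tokens
implied_input_cost = total_cost - output_cost   # remainder attributed to input
implied_input_tokens_m = implied_input_cost / PRICE_IN

print(f"output cost: ${output_cost:,.0f}")                      # $1,450
print(f"implied input tokens: ~{implied_input_tokens_m:.0f}M")  # ~207M
```

Under these assumptions, output tokens account for roughly $1,450 of the bill, implying on the order of 200 million input tokens, which is plausible for an agentic test suite that repeatedly feeds long contexts back to the model.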

Opus 4.6 is available through the Claude.ai apps and via Anthropic's API, Google Vertex, AWS Bedrock, and Microsoft Azure.