Anthropic's Labs team gets a shake-up as Instagram co-founder Mike Krieger joins experimental AI unit

Anthropic is growing its Labs team, which builds experimental Claude AI products. Instagram co-founder Mike Krieger, formerly Anthropic's Chief Product Officer, is moving to Labs to work with Ben Mann. Ami Vora, who joined in late 2025, will lead product development alongside CTO Rahul Patil.

According to President Daniela Amodei, Labs gives Anthropic room to experiment. The team has already shipped several hits: Claude Code became a billion-dollar product within six months, and the Model Context Protocol (MCP) now sees 100 million monthly downloads as the industry standard for connecting AI with tools and data. Cowork, which brings Claude Code capabilities to office work, was built in Labs in just 1.5 weeks. Skills and Claude in Chrome also came out of Labs.

Google's MedGemma 1.5 brings 3D CT and MRI analysis to open-source medical AI

Google has updated its open-source medical AI with MedGemma 1.5, a model capable of analyzing 3D medical scans such as CTs and MRIs. The release also includes a specialized speech tool that reportedly outperforms OpenAI's Whisper on medical dictation tasks, though both models carry strict licensing conditions for clinical use.

AI models don't have a unified "self" - and that's not a bug

Expecting internal coherence from language models means asking the wrong question, according to an Anthropic researcher.

"Why does page five of a book say that the best food is pizza and page 17 says the best food is pasta? What does the book really think? And you're like: 'It's a book!'" explains Josh Batson, research scientist at Anthropic, in MIT Technology Review.

The analogy comes from experiments on how AI models process facts internally. Anthropic discovered that Claude uses different mechanisms to know that bananas are yellow versus confirming that the statement "Bananas are yellow" is true. These mechanisms aren't connected to each other. When a model gives contradictory answers, it's drawing on different parts of itself - without any central authority coordinating them. "It might be like, you're talking to Claude and then it wanders off," says Batson. "And now you're not talking to Claude but something else."

The takeaway: Assuming language models have mental coherence like humans might be a fundamental category error.

Source: MIT Technology Review

UK startup turns planetary biodiversity into AI-generated drug candidates

UK company Basecamp Research has developed AI models together with researchers from Nvidia and Microsoft that generate potential new therapies against cancer and multidrug-resistant bacteria from a database of over one million species.

Web world models could give AI agents consistent environments to explore

Researchers at Princeton University, UCLA, and the University of Pennsylvania have developed an approach that gives AI agents persistent worlds to explore. Standard web code defines the rules, while a language model fills these worlds with stories and descriptions.

New Deepseek technique balances signal flow and learning capacity in large AI models

DeepSeek researchers have developed a technique that makes training large language models more stable. The approach uses mathematical constraints to solve a well-known problem with expanded network architectures.

AI benchmarks are broken and the industry keeps using them anyway, study finds

Benchmarks are supposed to measure AI model performance objectively. But according to an analysis by Epoch AI, results depend heavily on how the test is run. The research organization identifies numerous variables that are rarely disclosed but significantly affect outcomes.

Researchers extract up to 96% of Harry Potter word-for-word from leading AI models

Harry Potter, Game of Thrones, 1984: researchers pulled nearly complete books out of commercial language models. Two of the four systems tested barely put up a fight. The findings could shape ongoing copyright lawsuits against AI companies.