
Matthias Bastian

Matthias is the co-founder and publisher of THE DECODER, exploring how AI is fundamentally changing the relationship between humans and computers.
Google and OpenAI complain about distillation attacks that clone their AI models on the cheap

Google and OpenAI are complaining about data theft—yes, you read that right. According to Google, Gemini was hit with a massive cloning attempt through distillation, with a single campaign firing over 100,000 requests at the model, NBC News reports. Google calls it intellectual property theft, pointing to companies and researchers chasing a competitive edge.

Meanwhile, OpenAI has sent a memo to the US Congress accusing DeepSeek of using disguised methods to copy American AI models. The memo also flags China's energy buildout, with China adding roughly ten times as much new electricity capacity as the US by 2025, and confirms ChatGPT is growing at around ten percent per month.

Distillation floods a model with targeted prompts to extract its internal logic, especially its "reasoning steps," then uses that knowledge to build a cheaper clone, potentially skipping billions in training costs. Google security head John Hultquist warns smaller companies running their own AI models face the same risk, particularly if those models were trained on sensitive business data.
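The mechanics described above can be illustrated with a toy sketch. This is not how any real attack on Gemini worked; it is a minimal, self-contained numpy example where a hypothetical "teacher" model is only observable through its output probabilities (like a public API), an attacker floods it with probe inputs, and a "student" trained on the recorded responses ends up mimicking it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a proprietary model: a fixed logistic
# classifier whose weights the attacker never sees. Only its output
# probabilities are observable, as with a hosted API.
TEACHER_W = np.array([1.5, -2.0, 0.5])

def teacher_api(x):
    """Black-box query: returns the teacher's probability for class 1."""
    return 1.0 / (1.0 + np.exp(-x @ TEACHER_W))

# Step 1: flood the "API" with many probe inputs, record the outputs.
probes = rng.normal(size=(5000, 3))
soft_labels = teacher_api(probes)

# Step 2: train a student on the recorded input/output pairs
# (gradient descent on cross-entropy against the teacher's soft labels).
student_w = np.zeros(3)
lr = 0.5
for _ in range(200):
    preds = 1.0 / (1.0 + np.exp(-probes @ student_w))
    grad = probes.T @ (preds - soft_labels) / len(probes)
    student_w -= lr * grad

# Step 3: the student now agrees with the teacher on unseen inputs,
# without the attacker ever paying the teacher's training cost.
test = rng.normal(size=(1000, 3))
agreement = np.mean((teacher_api(test) > 0.5) == ((test @ student_w) > 0.5))
print(f"agreement on held-out inputs: {agreement:.3f}")
```

The point of the sketch is the asymmetry: the attacker's cost is queries, not training compute, which is why Google treats high-volume probing campaigns as cloning attempts rather than ordinary usage.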

Anthropic CEO Dario Amodei suggests OpenAI doesn't "really understand the risks they're taking"

Anthropic’s revenue has grown 10x year over year, and CEO Dario Amodei believes Nobel Prize-level AI is maybe just a year or two away. So why isn’t he going all in on compute? Because being off by even one year could mean bankruptcy, and he’s not sure his competitors have done the math.

Google's WebMCP moves the web closer to becoming a structured database for AI agents

In the future, AI agents won’t just search the web; they’ll browse it, shop on it, and complete tasks on their own. At least that’s Big AI’s vision, and Google’s WebMCP wants to turn websites into standardized interfaces for these agents. For website operators who depend on human visitors, that could be a serious problem.

xAI's founder exodus reportedly tied to safety concerns and frustration over Grok's failure to catch up

xAI has lost half of its founders in recent weeks and months. Elon Musk said on X that some departures were part of a restructuring where "unfortunately we had to part with some people" to "improve speed of execution."

But former employees tell a different story. One ex-employee told The Verge that many people at the company had grown disillusioned with Grok's focus on NSFW content and its lack of safety standards. A second former employee backed that up: "There is zero safety whatsoever in the company." According to the source, Musk deliberately pushed to make the model less restricted, viewing safety measures as censorship. Among other things, Grok had generated sexualized images of children.

You survive by shutting up and doing what Elon wants.

Another common complaint is that xAI is "stuck in the catch-up phase" without shipping anything fundamentally new compared to OpenAI or Anthropic, even though they're all trying to do the same thing anyway. Several people who left are now using money from the SpaceX merger to start their own companies, including AI infrastructure startup Nuraline.

Anthropic recruits ex-Google data center veterans to build its own AI infrastructure empire

Anthropic is discussing building at least 10 gigawatts of data center capacity worth hundreds of billions of dollars, recruiting ex-Google managers and lining up Google as a financial backer to make it happen.

OpenAI is retiring GPT-4o and three other legacy models tomorrow, likely for good

OpenAI is dropping several older AI models from ChatGPT on February 13, 2026: GPT-4o, GPT-4.1, GPT-4.1 mini, and o4-mini. The models will stick around in the API for now. The company says it comes down to usage: only 0.1 percent of users still pick GPT-4o on any given day.

There's a reason OpenAI is being so careful about GPT-4o specifically: the model has a complicated past. OpenAI already killed it once back in August 2025, only to bring it back for paying subscribers after users pushed back hard. Some people had grown genuinely attached to the model, which was known for its sycophantic, people-pleasing communication style. OpenAI addresses this head-on at the end of the post:

We know that losing access to GPT‑4o will feel frustrating for some users, and we didn’t make this decision lightly. Retiring models is never easy, but it allows us to focus on improving the models most people use today.

OpenAI

OpenAI points to GPT-5.1 and GPT-5.2 as improved successors that incorporate feedback from GPT-4o users. People can now tweak ChatGPT's tone and style, things like warmth and enthusiasm. But that probably won't be enough for the GPT-4o faithful.

Google DeepMind upgrades Gemini 3 Deep Think for complex science and engineering tasks

Google DeepMind has upgraded its specialized thinking mode "Gemini 3 Deep Think" and made it available through the Gemini app and as an API via a Vertex AI early access program. The upgrade targets complex tasks in science, research, and engineering.

Google AI Ultra subscribers can access Deep Think through the Gemini app, while developers and researchers can sign up separately for the API program. According to Google DeepMind, the model tops several major benchmarks:

| Benchmark | Deep Think | Claude Opus 4.6 | GPT-5.2 | Gemini 3 Pro Preview |
|---|---|---|---|---|
| ARC-AGI-2 (Logical reasoning) | 84.6% | 68.8% | 52.9% | 31.1% |
| Humanity's Last Exam (Academic reasoning) | 48.4% | 40.0% | 34.5% | 37.5% |
| MMMU-Pro (Multimodal reasoning) | 81.5% | 73.9% | 79.5% | 81.0% |
| Codeforces (Coding/algorithms, Elo) | 3,455 | 2,352 | - | 2,512 |

While Deep Think dominates in logic and coding, the gap narrows significantly on MMMU-Pro: it scored 81.5 percent, barely ahead of Gemini 3 Pro Preview at 81.0 percent. This suggests the thinking upgrades focus heavily on abstract reasoning rather than visual processing. Deep Think also achieved gold medal-level results at the 2025 Physics and Chemistry Olympiads, and Google has published examples of the model in scientific use.