
Matthias Bastian

Matthias is the co-founder and publisher of THE DECODER, exploring how AI is fundamentally changing the relationship between humans and computers.
Deepseek v4 will reportedly run entirely on Huawei chips in a major win for China's AI independence push

Deepseek v4 is expected to launch in the coming weeks, and it will run entirely on Huawei chips. According to The Information, the model represents a major milestone in China's effort to break free from foreign chip dependency. Deepseek reportedly spent months working with Huawei and chip designer Cambricon to port the model to Chinese-made chips. Notably, Nvidia didn't get early access to v4—only Chinese chip companies did.

The bet on domestic hardware might already be paying off. Chinese tech companies including Alibaba, Bytedance, and Tencent have ordered hundreds of thousands of units of Huawei's new Ascend 950PR to run Deepseek v4 through their cloud services and integrate it into their own AI applications, according to five people familiar with the matter. The surge in demand pushed chip prices up by 20 percent.

Huawei says the Ascend 950PR delivers roughly 2.8 times the computing power of Nvidia's H20, though it still falls short of the H200. Huawei also continues to face production bottlenecks caused by US export controls.

Anthropic says Claude Code's usage drain comes down to peak-hour caps and ballooning contexts

Anthropic has looked into complaints from users who were hitting their Claude Code usage limits much faster than expected. According to Anthropic's Lydia Hallie, the two main causes are tighter limits during peak hours and sessions whose 1-million-token contexts keep growing. Hallie says Anthropic also fixed some bugs, but none of them led to incorrect billing. The company has shipped efficiency improvements as well and added in-product pop-ups to keep users informed.

Hallie recommends using Sonnet 4.6 instead of Opus, since Opus burns through limits roughly twice as fast. She also suggests turning off Extended Thinking when it's not needed, starting fresh sessions instead of continuing old ones, and limiting the context window. Users who still notice unusually high usage should report it through the feedback function.

OpenAI shifts to usage-based pricing for Codex in ChatGPT business plans

OpenAI is switching to usage-based pricing for Codex in ChatGPT Business and Enterprise. Admins can enable free Codex access across their workspace and pay only for actual usage, with no upfront licenses required. Eligible Business customers can also claim up to $500 in promotional credit per workspace for a limited time.

The move is designed to lower the barrier for enterprise adoption, OpenAI says. Coding tools typically spread from individual developers to full teams. "This model gives organizations a simpler way to support that motion inside a managed workspace," the company writes. OpenAI is likely betting that hands-on experience will drive long-term lock-in. It's a direct shot at GitHub Copilot and Cursor, which still charge per seat.

OpenAI says over two million developers use Codex weekly, with Business and Enterprise usage growing sixfold since January. The company's biggest competitor in this space is Anthropic with Claude Code.

OpenAI decides the best way to fight critical AI coverage is to own a newsroom

OpenAI has acquired tech talk show TBPN. The show will supposedly remain editorially independent but report to OpenAI’s communications department. That’s as contradictory as it sounds. So what’s OpenAI really after?

Sakana AI launches "Ultra Deep Research" to automate weeks of strategy work

Japanese AI startup Sakana AI has unveiled "Sakana Marlin," its first product for business customers. The system works autonomously: give it a topic, and it researches on its own for up to eight hours, then delivers detailed reports and presentations. Sakana AI says the tool can produce professional strategy analyses that would normally take human teams several weeks.

Sample output from "Sakana Marlin": after autonomous research, the tool creates text reports and presentation slides on a given topic (here: AI trends in the financial sector). | Image: Sakana AI

Sakana Marlin combines the company's "AI Scientist," designed to resolve contradictions, with its previously introduced "AB-MCTS" method for strategic searches. Multiple AI models work together, and longer thinking time is meant to yield better results, the company says.

The company is looking for beta testers in finance, research, and business consulting. The beta is free, but requires registration (the form is in Japanese). The biggest weakness of automated reports like these is hard-to-spot AI errors, something the startup doesn't address in its announcement.

Microsoft's MAI-Transcribe-1 runs 2.5x faster than its predecessor at $0.36 per audio hour

Microsoft has introduced MAI-Transcribe-1, a speech-to-text model that supports 25 languages and achieves the lowest word error rate of any model tested on the FLEURS benchmark, beating Scribe v2, Whisper-large-V3, GPT-Transcribe, and Gemini 3.1 Flash-Lite. The model is also built to handle tough recording conditions like background noise, poor audio quality, and overlapping speech, Microsoft says.

MAI-Transcribe-1 (green) leads in word error rate on the FLEURS benchmark in most of the 25 languages tested, outperforming Scribe v2, Gemini 3.1 Flash-Lite, Whisper-large-v3, and GPT-Transcribe. | Image: Microsoft

Microsoft is rolling out MAI-Transcribe-1 across Copilot Voice and Microsoft Teams. Developers can try it as a public preview through Microsoft Foundry and the Microsoft AI Playground. The model runs 2.5 times faster than Microsoft's previous Azure Fast offering and costs $0.36 per audio hour. Combined with MAI-Voice-1 and a language model, it can also power voice agents, Microsoft says.
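Since billing is per audio hour, transcription costs scale linearly with the amount of audio. A minimal back-of-the-envelope sketch, using only the $0.36-per-hour rate quoted by Microsoft (the archive size in the example is hypothetical):

```python
PRICE_PER_AUDIO_HOUR = 0.36  # USD, per Microsoft's announcement

def transcription_cost(audio_hours: float) -> float:
    """Cost in USD to transcribe the given amount of audio."""
    return audio_hours * PRICE_PER_AUDIO_HOUR

# e.g. a hypothetical 10,000-hour podcast archive:
print(f"${transcription_cost(10_000):,.2f}")  # $3,600.00
```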

Cohere and Mistral recently released open-source alternatives that perform at a similar level.