
Matthias Bastian

Matthias is the co-founder and publisher of THE DECODER, exploring how AI is fundamentally changing the relationship between humans and computers.
OpenAI clarifies it won't claim ownership of user discoveries following confusion over monetization plans

OpenAI's Kevin Weil pushes back on reports that the company plans to claim a share of discoveries made by individual users, entrepreneurs, or scientists. The clarification follows a blog post by CFO Sarah Friar outlining plans for IP licensing agreements and outcome-based pricing that would let OpenAI share in the value its tools help create.

Licensing, IP-based agreements, and outcome-based pricing will share in the value created.

Sarah Friar, via OpenAI

Weil clarified on X that Friar was referring to interest OpenAI has heard from large organizations in licensing or IP-based partnerships. The company is open to exploring creative ways to partner and align incentives, but "that's not something we're doing today." If it happens in the future, it would be a bespoke agreement with a company, "not something that would impact individual users," Weil says.

Nearly half of Microsoft's commercial contract backlog is tied to OpenAI

Microsoft posts record cloud revenue, but the stock is down double digits. Investors question whether billions in AI spending will pay off, especially with nearly half the cloud backlog coming from one customer: OpenAI.

Nvidia, Amazon, and Microsoft could invest up to $60 billion in OpenAI

OpenAI's latest funding round might hit peak circularity. According to The Information, the AI company is in talks with Nvidia, Microsoft, and Amazon about investments totaling up to $60 billion. Nvidia could put in as much as $30 billion, Amazon more than $10 billion—possibly even north of $20 billion—and Microsoft less than $10 billion. On top of that, existing investor SoftBank could contribute up to $30 billion. If these deals go through, the funding round could reach the previously rumored $100 billion mark at a valuation of around $730 billion.

Critics will likely point out how circular these deals really are. Several potential investors, including Microsoft and Amazon, also sell servers and cloud services to OpenAI. That means a chunk of the investment money flows right back to the investors themselves. These arrangements keep the AI hype machine running without the actual financial benefits of generative AI showing up in what end users pay.

Allen AI's SERA brings open coding agents to private repos for as little as $400 in training costs

AI research institute Allen AI has released SERA, a family of open-source coding agents designed for easy adaptation to private code bases. The top model, SERA-32B, solves up to 54.2 percent of problems in the SWE-Bench-Test Verified coding benchmark (64K context), outperforming comparable open-source models.

SERA outperforms comparable open-source coding agents on the SWE-Bench-Test Verified benchmark with 32K context. | Image: Allen AI

According to AI2, training takes just 40 GPU days and costs between $400 (to match previous open-source results) and $12,000 (for performance on par with leading industry models). This makes training on proprietary code data realistic even for small teams. SERA uses a simplified training method called "Soft-verified Generation" that doesn't require perfectly correct code examples. Technical details are available in Allen AI's blog post.

The models work with Claude Code and can be launched with just two lines of code, according to Allen AI. All models, code, and instructions are available on Hugging Face under the Apache 2.0 license.
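For a rough sense of what running one of these checkpoints could look like, here is a minimal sketch using the Hugging Face transformers library. The model ID "allenai/SERA-32B", the prompt, and the generation settings are placeholders for illustration; Allen AI's documented two-line launch commands may differ.

```python
# Minimal sketch (not Allen AI's official launcher): load a SERA checkpoint
# from Hugging Face with the transformers library. The model ID below is a
# hypothetical placeholder; check the Hugging Face collection for the real one.
from transformers import pipeline

coder = pipeline("text-generation", model="allenai/SERA-32B", device_map="auto")
result = coder("Write a Python function that reverses a linked list.", max_new_tokens=256)
print(result[0]["generated_text"])
```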

Former Tesla AI chief Andrej Karpathy now codes "mostly in English" just three months after calling AI agents useless

Just last October, Andrej Karpathy dismissed AI agents: “They just don’t work.” Now he says 80 percent of his coding is agent-based and calls it the “biggest change to my basic coding workflow in ~2 decades.” A typically measured voice is joining the agent coding hype, but with some warnings attached.

OpenAI reportedly launches ChatGPT ads at premium TV prices

OpenAI is charging around $60 per 1,000 impressions for its initial ChatGPT ads, far above typical online advertising rates of a few dollars per 1,000 impressions and closer to what advertisers pay for premium TV spots like NFL games, according to The Information. The ads show up below ChatGPT responses in the free and lower-cost "Go" tiers.

OpenAI is also reportedly charging per impression rather than per click. Advertisers typically prefer click-based billing because it's easier to measure results. The decision to go with impressions likely reflects how AI chatbot users behave differently than traditional search users: they click on external links far less often. Perplexity uses the same approach, also charging per 1,000 impressions.
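To put the reported rate in perspective, here is a quick back-of-the-envelope comparison. Only the $60 figure comes from The Information's report; the $3 CPM used for typical online display advertising is an illustrative assumption.

```python
# Back-of-the-envelope CPM math (illustrative; only the $60 rate is reported).
def campaign_cost(impressions: int, cpm_usd: float) -> float:
    """Total cost of buying a number of impressions at a given CPM (price per 1,000 impressions)."""
    return impressions / 1_000 * cpm_usd

million = 1_000_000
print(campaign_cost(million, 60.0))  # $60,000 per million impressions at OpenAI's reported rate
print(campaign_cost(million, 3.0))   # $3,000 per million at an assumed $3 display CPM
```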

The move toward advertising—at premium prices and in a format that's less appealing to advertisers—suggests OpenAI needs to ramp up revenue quickly to justify its high valuation to investors. Sam Altman previously called ChatGPT advertising a last resort and a potential dystopia.

Microsoft's Maia 200 AI chip claims performance lead over Amazon and Google

Microsoft has unveiled its new AI inference chip, Maia 200. Built specifically for inference workloads, the chip delivers 30 percent better performance per dollar than current-generation chips in Microsoft's data centers, the company claims. It's manufactured using TSMC's 3-nanometer process, packs over 140 billion transistors, and features 216 GB of high-speed memory.

According to Microsoft, the Maia 200 is now the most powerful in-house chip among major cloud providers. The company claims it delivers three times the FP4 performance of Amazon's Trainium 3 while also outperforming Google's TPU v7 in FP8 calculations—though independent benchmarks have yet to verify these figures.

Microsoft's comparison shows the Maia 200 outperforming Amazon's Trainium 3 and Google's TPU v7 across key specifications. | Image: Microsoft

Microsoft says the chip already powers OpenAI's GPT 5.2 models and Microsoft 365 Copilot. Developers interested in trying it out can sign up for a preview of the Maia SDK. The Maia 200 is currently available in Microsoft's Iowa data center, with Arizona coming next. Microsoft has published further technical details about the chip.