
Matthias Bastian

Matthias is the co-founder and publisher of THE DECODER, exploring how AI is fundamentally changing the relationship between humans and computers.
First token counts reveal Opus 4.7 costs significantly more than 4.6 despite Anthropic's flat pricing

Anthropic's Opus 4.7 carries the same sticker price as 4.6, but it burns through noticeably more tokens per request. That's according to measurements published by developer Abhishek Ray on Claude Code Camp.

Anthropic's own migration guide cites an increase of 1.0x to 1.35x. Ray's averages largely fall within that range, though some content types push past it: 1.325x on average for real Claude Code content, 1.445x for a CLAUDE.md file, and 1.47x for technical documentation. A community evaluation on tokens.billchambers.me goes even higher, pointing to 37.4 percent more tokens and costs per request across 483 submissions.

Community data shows a 37.4 percent jump in both token usage and per-request costs when switching from Opus 4.6 to 4.7. | Image: Screenshot via Tokenomics

Code takes a bigger hit, Ray notes, while prose sees a smaller bump, and Chinese and Japanese texts are barely affected. For a sample session of 80 turns, he estimates an extra 20 to 30 percent in costs, pushing the bill from $6.65 to somewhere between $7.86 and $8.76.
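
Because Anthropic's per-token price is unchanged, the cost increase tracks the token multiplier directly. A minimal sketch of that arithmetic (the helper and variable names are my own; the multipliers are the ones cited above):

```python
def scaled_cost(baseline: float, multiplier: float) -> float:
    """Per-session cost when token usage grows by `multiplier`
    but the per-token price stays flat."""
    return baseline * multiplier

# Multipliers cited above
community = 1.374   # tokens.billchambers.me, 483 submissions
ray_code = 1.325    # Ray's average for real Claude Code content

# Percent cost increase implied by each multiplier
for m in (community, ray_code):
    print(f"x{m} -> +{(m - 1) * 100:.1f}% cost per request")
```

Plugging any baseline session cost into `scaled_cost` with one of these multipliers gives the corresponding per-session estimate.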

In return, users get slightly better instruction following: a test using the IFEval benchmark across 20 prompts shows Opus 4.7 sticking to strict instructions five percentage points more reliably than its predecessor.

AI-generated influencers flood social media with pro-Trump content ahead of midterms

Hundreds of AI avatars are flooding TikTok, Instagram, and YouTube with pro-Trump messaging. Some accounts have pulled in more than 35,000 followers and millions of views, and Trump himself has already shared AI-generated content. It’s unclear whether this is the work of individual activists or a coordinated campaign.

Google launches generative UI standard for AI agents

Google has released A2UI version 0.9, a framework-agnostic standard for generative user interfaces. The protocol lets AI agents build UI elements on the fly, pulling from an application's existing components across web, mobile, and other platforms. The new version ships with a shared web core library, an official React renderer, and updated renderers for Flutter, Lit, and Angular.

A new Agent SDK aims to streamline development and installs through Python, with Go and Kotlin versions on the way. The update also adds client-defined functions, client-server data syncing, and improved error handling.

Google says the ecosystem is expanding fast, with integrations for AG2, A2A 1.0, Vercel's json-renderer, and Oracle's Agent Spec. Early sample apps include a Personal Health Companion from Rebel App Studio and a Life Goal Simulator from Very Good Ventures. Documentation and examples are available at A2UI.org.

Salesforce CEO Marc Benioff says APIs are the new UI for AI agents

Salesforce CEO Marc Benioff says the API is the new UI. With "Headless 360", the company is opening up its entire platform, including Agentforce and Slack, through APIs, the Model Context Protocol (MCP, an interface that connects AI models to external data sources), and a Command Line Interface (CLI) for text-based control.

In the agentic enterprise, the conversation is the interface.

Salesforce

Benioff writes that browsers are no longer needed because the API itself becomes the user interface. AI agents can tap into data, workflows, and tasks directly through Slack, voice, or other channels. Benioff promises faster development cycles and a fully agent-driven approach.

The move puts into practice a theory OpenAI CEO Sam Altman laid out in February 2026: every company is now an API company, "whether they want to be or not." Altman argued that traditional user interfaces are losing value as AI agents increasingly access services on their own.

Anthropic CEO Amodei declares "there is no end to the rainbow" for AI scaling

Anthropic CEO Dario Amodei thinks the scaling of large AI models still has plenty of room to run. "There's no end to the rainbow. There's just the rainbow," Amodei told the Financial Times. "We don't see anything slowing down." He's convinced that the "big blob of compute," as he calls it, has a long way to go.

On AI's impact on the job market, Amodei says the technology can only "diffuse at the speed of trust," with trust being in short supply. "Is that just propaganda? Is that just vaporware that's not going to happen? We actually have to make it happen," he says.

Part of the problem, according to Amodei, is that the industry hasn't delivered on its upbeat promises yet, while the warnings are already piling up—including his own prediction that AI could wipe out 50 percent of entry-level office jobs within five years. Amodei says the industry can't afford to downplay the disruption. Instead, it needs to make the upside big enough to serve as a "tool" for working through the fallout.

Self-improving AI startup Recursive Superintelligence pulls in $500 million just four months after founding

Recursive Superintelligence, a four-month-old AI startup, has raised at least $500 million at a $4 billion pre-money valuation. GV (formerly Google Ventures) led the round, with Nvidia joining in, according to the Financial Times. The round was so oversubscribed that Recursive could end up pulling in as much as $1 billion.

The founding team includes Richard Socher, former chief scientist at Salesforce, and Tim Rocktäschel, an AI professor at University College London and previously principal scientist at Google Deepmind. The roughly 20-person team also features former OpenAI researchers along with alumni from Google and Meta.

Recursive Superintelligence, which hasn't officially launched yet, wants to build an AI system that keeps improving itself without any human involvement. For now, the concept remains in the research phase and hasn't been tested over long stretches of time, the FT reports. Many researchers see this kind of recursive self-improvement as the key to reaching superintelligence: AI that far surpasses human capabilities.

Deepseek reportedly seeks outside funding for the first time at $10 billion valuation

Deepseek is in talks to raise outside capital for the first time, aiming for at least $300 million at a valuation of $10 billion or more, according to The Information. Until now, the Chinese AI startup has been funded entirely by its owner, hedge fund High-Flyer Capital Management, and has turned down offers from top Chinese venture capitalists and tech giants. Founder and CEO Liang Wenfeng has long positioned himself as a champion of keeping the company free from commercial pressure.

The shift comes as Deepseek faces mounting competition and a talent drain. Luo Fuli, a co-developer of the V3 model, has left for Xiaomi, while Guo Daya jumped to ByteDance. The company's next flagship, V4, has been pushed back several times, in part because engineers are working to make it compatible with Huawei chips. That effort ties into Beijing's push to prop up domestic chipmakers and cut China's reliance on US silicon.

Zuckerberg reportedly trades headcount for compute as Meta readies to cut 10 percent of its workforce to fund AI infrastructure

Meta is preparing major layoffs to offset soaring AI costs. Reuters sources say the company will cut about 8,000 jobs on May 20, roughly 10 percent of its global workforce, with a second round planned for later this year. In total, more than 20 percent of jobs could go, Reuters reported in March. Meta declined to comment.

The cuts come as CEO Mark Zuckerberg sinks hundreds of billions into AI infrastructure and pushes to flatten hierarchies and lean on AI-assisted employees. Meta recently reorganized its Reality Labs teams and spun up a new "Applied AI" unit to build autonomous AI agents.

Meta is also back in the frontier model race but still playing catch-up. Its new "Muse Spark" is a natively multimodal reasoning model with tool use, visual chain-of-thought, and multi-agent orchestration—state of the art in architecture but still trailing Google, Anthropic, and OpenAI on benchmarks. It's also the first such model Meta isn't releasing as open weights, keeping it locked to its own products and a private API.

OpenAI loses three executives in one swoop as restructuring reshapes its product lineup

Kevin Weil, a member of the management team and former Chief Product Officer, is leaving the company. Weil announced his departure on X. Most recently, he led the development of AI tools for scientists. His OpenAI for Science division will be split up among other research teams, with the science tool Prism and its team moving over to the coding product Codex, according to The Information. The move is part of a larger plan to bundle apps like Prism and the Atlas browser into a single super app.

Bill Peebles, the research lead behind the Sora video model, is also heading out, just a month after OpenAI decided to shut down the Sora app due to a lack of compute capacity. The company is shifting its focus toward coding and enterprise customers to regain ground from Anthropic.

While the first two exits seem tied to OpenAI's restructuring, the third looks more personal. Srinivas Narayanan, CTO of B2B Applications and head of the API engineering team, is also leaving. He said on X that he wants to take care of his parents before deciding on his next career move.

Google Deepmind's Gemini Robotics-ER 1.6 gives robots a sharper brain for planning and perception

Google Deepmind has released Gemini Robotics-ER 1.6, an upgraded model for embodied reasoning in robots. It acts as a high-level thinking layer that helps robots understand their surroundings and plan tasks on their own, tapping tools like Google Search or vision-language-action models when needed. Deepmind says the new version beats both Gemini Robotics-ER 1.5 and Gemini 3.0 Flash at pointing to objects, counting, and recognizing successful task execution.

Reading instruments like pressure gauges and sight glasses, a capability developed with Boston Dynamics, has also seen a major boost. The model pairs agentic image processing with code execution: it zooms in to catch small display details, uses pointing functions and code to calculate proportions and scale distances, then applies world knowledge to interpret the reading. Boston Dynamics' Spot robot reportedly uses the feature for system inspections.

The model is available through the Gemini API and Google AI Studio, with a Colab example for developers.