Anthropic's dispute with the Pentagon is now rippling through Google and OpenAI. According to the New York Times, more than 100 Google AI employees sent a letter to chief scientist Jeff Dean—who had previously voiced support for Anthropic's position—demanding that Google draw the same red lines for Gemini: no surveillance of American citizens and no autonomous weapons without human oversight. Separately, nearly 50 OpenAI and 175 Google employees published an open letter criticizing the Pentagon's negotiating tactics.
We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.
According to the Wall Street Journal, OpenAI CEO Sam Altman told his employees that OpenAI is working on its own Pentagon contract that would include the same safety guidelines Anthropic is pushing for. Altman hopes to find a solution that works for other AI companies as well.
Meta has signed a multi-year, multi-billion dollar contract with Google to rent its AI chips—Tensor Processing Units (TPUs)—for developing new AI models. That's according to The Information. Meta is also looking into buying TPUs outright for its own data centers starting next year.
The deal takes direct aim at Nvidia, which dominates the AI chip market and has been Meta's go-to GPU supplier for AI training. Just days earlier, Meta had announced plans to buy millions of GPUs from Nvidia and AMD. Internally, Google Cloud executives have set a goal of capturing up to ten percent of Nvidia's annual revenue, which currently runs to roughly $200 billion, through TPU sales. Google has also launched a joint venture with an investment firm to lease TPUs to other customers.
Here's where it gets complicated: Google itself is one of Nvidia's biggest customers, since cloud customers still expect access to GPU servers. So Google has to keep buying Nvidia's latest chips to stay competitive in the cloud market, while simultaneously trying to eat into Nvidia's market share with its own silicon. OpenAI reportedly managed to negotiate 30 percent lower prices from Nvidia simply because TPUs exist as an alternative.
Block cuts nearly half its workforce as Dorsey credits AI, but the real reasons predate the hype
Jack Dorsey blames AI for cutting nearly half of Block’s workforce. But a closer look at the company’s history of overhiring and structural problems tells a different story.
Anthropic's AI assistant Claude is picking up new features in Cowork, its desktop app. Users can now set up scheduled tasks that Claude handles automatically at set times: a morning briefing, weekly spreadsheet updates, or a Friday presentation for the team.
Anthropic has acquired AI startup Vercept to boost Claude's computer use capabilities. Vercept built AI that works directly on a user's machine, understands screen content, and executes tasks. Founders Kiana Ehsani, Luca Weihs, and Ross Girshick are joining Anthropic with their team. The acquisition price hasn't been disclosed.
Vercept solves perception and interaction problems central to AI-driven computer use, according to Anthropic. The technology lets an AI model read and operate human-designed interfaces from screenshots without needing a dedicated programming interface (API).
Claude already handles multi-step tasks in running applications. With the recently released Sonnet 4.6 model, Claude scores 72.5 percent on OSWorld—a benchmark that measures how well AI models complete real-world computer tasks—up from less than 15 percent at the end of 2024. The Vercept team could push that number even higher.
Suno investor admits she ditched Spotify for AI music, accidentally undermining the company's fair use defense
Suno investor C.C. Gong said on X that she barely uses Spotify anymore, accidentally undermining the company's fair use defense and handing the music industry a powerful argument in its lawsuit against the AI music startup.
Andrej Karpathy, who previously led AI development at Tesla and was a founding member of OpenAI, says programming with AI agents has changed fundamentally over the past two months. According to Karpathy, AI agents barely worked before December 2026, but since then they've become reliable, thanks to higher model quality and the ability to stay on task for longer stretches.
As an example, he describes how an AI agent independently built a video analysis dashboard: he typed the task in plain English, the agent worked for 30 minutes, solved problems on its own, and delivered a finished result. Three months ago, that would have been an entire weekend project, Karpathy says.
As a result, programming is becoming unrecognizable. You're no longer typing computer code into an editor, the way it's been done since computers were invented; that era is over. You're spinning up AI agents, giving them tasks *in English*, and managing and reviewing their work in parallel.
Karpathy via X
Karpathy also points out that these systems aren't perfect and still need human "high-level direction, judgement, taste, oversight, iteration, and hints and ideas." What makes his take especially notable is how recently he held the opposite view. As late as October 2025, he called the hype around AI agents exaggerated, saying the products were far from ready for real-world use. He fundamentally reversed that position after the winter releases of Opus 4.5 and Codex 5.2, and is now doubling down on it.