Google DeepMind and OpenAI employees demand Anthropic-style red lines on Pentagon surveillance and autonomous weapons

Anthropic's dispute with the Pentagon is now rippling through Google and OpenAI. According to the New York Times, more than 100 Google AI employees sent a letter to chief scientist Jeff Dean, who had previously voiced support for Anthropic's position, demanding that Google draw the same red lines for Gemini: no surveillance of American citizens and no autonomous weapons without human oversight. Separately, nearly 50 OpenAI employees and 175 Google employees published an open letter criticizing the Pentagon's negotiating tactics.

We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.

From the open letter "We will not be divided"

According to the Wall Street Journal, OpenAI CEO Sam Altman told his employees that OpenAI is working on its own Pentagon contract that would include the same safety guidelines Anthropic is pushing for. Altman hopes to find a solution that works for other AI companies as well.

Meta signs multi-billion dollar deal to rent Google's TPUs in a direct challenge to Nvidia's AI chip dominance

Meta has signed a multi-year, multi-billion dollar contract with Google to rent its AI chips—Tensor Processing Units (TPUs)—for developing new AI models. That's according to The Information. Meta is also looking into buying TPUs outright for its own data centers starting next year.

The deal takes direct aim at Nvidia, which dominates the AI chip market and has been Meta's go-to GPU supplier for AI training. Just days earlier, Meta had announced plans to buy millions of GPUs from Nvidia and AMD. Internally, Google Cloud executives have set a goal of capturing up to ten percent of Nvidia's roughly $200 billion in annual revenue through TPU sales. Google has also launched a joint venture with an investment firm to lease TPUs to other customers.

Here's where it gets complicated: Google itself is one of Nvidia's biggest customers, since cloud customers still expect access to GPU servers. So Google has to keep buying Nvidia's latest chips to stay competitive in the cloud market, while simultaneously trying to eat into Nvidia's market share with its own silicon. OpenAI reportedly managed to negotiate 30 percent lower prices from Nvidia simply because TPUs exist as an alternative.

Figma and OpenAI connect design and code through new Codex integration

A new integration links Figma's design platform directly with OpenAI's Codex. Teams can automatically generate editable Figma designs from code and convert designs into working code. It runs on the open MCP standard, supports Figma Design, Figma Make, and FigJam, and is set up in the Codex desktop app for macOS.

Until now, moving between Figma and code was mostly a one-way street. Dev Mode offered basic HTML/CSS snippets, plugins exported designs as React or HTML, and Figma Make generated React components from text input. These tools worked in isolation without understanding the full project. The new integration creates an end-to-end connection where the AI accesses code, Figma files, and the design system simultaneously.

Figma was one of the first partners with its own ChatGPT app and uses ChatGPT Enterprise internally. According to OpenAI, over one million people access Codex weekly, with usage up more than 400 percent since the start of the year.

Claude Code now remembers your fixes, your preferences, and your project quirks on its own

Claude Code now remembers what it learns across sessions, automatically tracking debugging patterns, project context, and preferred working methods without manual input. Previously, users had to log this information themselves or use /init to populate CLAUDE.md files. The new auto-memory function builds on that: Claude creates a MEMORY.md file per project, stores its findings there, and pulls them up automatically in later sessions. Work through a tricky debugging problem once, and you won't have to explain the fix again. Users can also explicitly ask Claude to save specific information. The feature is on by default and can be disabled via /memory, the settings file, or an environment variable.
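
As a purely hypothetical illustration (the article doesn't specify the file's format, and this is not Anthropic's documented schema), a project-level MEMORY.md accumulating such notes might look something like:

```markdown
# MEMORY.md — hypothetical example only; file names and sections are illustrative

## Debugging patterns
- The flaky test in test_auth was traced to a missing await; re-run it once
  before treating a failure as real.

## Project context
- The backend targets Postgres 15; migrations must be applied before the
  test suite will pass.

## Preferences
- Prefer small, focused commits and run the linter before proposing a diff.
```

The point of the feature is that notes like these would be written and re-read by Claude itself across sessions, rather than maintained by hand.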

Another recent update: locally running sessions can now be continued on the go via smartphone, tablet, or browser at claude.ai/code, without data migrating to the cloud.

Claude's Cowork desktop app now runs scheduled tasks so your AI assistant works while you sleep

Anthropic's AI assistant Claude is picking up new features in its desktop app Cowork. Users can now set up scheduled tasks that Claude handles automatically at set times: a morning briefing, weekly spreadsheet updates, or a Friday presentation for the team.

Anthropic also points to the plugins already available that give Cowork specialized knowledge in areas like design, technology, and law, and it maintains a full overview of available plugins. There's also a new "Customize" section in Cowork's sidebar where users can manage all their plugins, skills, and connections in one place.

Cowork is available as a research preview for macOS and Windows, open to all paying Claude subscribers. As with any agent-based AI system, there are cybersecurity considerations. It's worth being careful about which parts of your computer you give the software access to.

Anthropic acquires Vercept to give Claude sharper eyes for reading and controlling computer screens

Anthropic has acquired AI startup Vercept to boost Claude's computer use capabilities. Vercept built AI that works directly on a user's machine, understands screen content, and executes tasks. Founders Kiana Ehsani, Luca Weihs, and Ross Girshick are joining Anthropic with their team. The acquisition price hasn't been disclosed.

Vercept solves perception and interaction problems central to AI-driven computer use, according to Anthropic. The technology lets an AI model read and operate human-designed interfaces from screenshots without needing a dedicated programming interface (API).

Vercept will shut down its desktop AI agent "Vy" in the coming weeks. What likely caught Anthropic's attention is the startup's "VyUI" interface recognition model, which reportedly outperformed comparable OpenAI technology in benchmarks.

Benchmark (UI element grounding)    VyUI accuracy    OpenAI model accuracy
ScreenSpot v1                       92%              18.3%
ScreenSpot v2                       94.7%            87.9%
GroundUI Web                        84.8%            82.3%

Claude already handles multi-step tasks in running applications. With the recently released Sonnet 4.6 model, Claude scores 72.5 percent on OSWorld—a benchmark that measures how well AI models complete real-world computer tasks—up from less than 15 percent at the end of 2024. The Vercept team could push that number even higher.


Suno investor admits she ditched Spotify for AI music, accidentally undermining the company's fair use defense

Suno investor C.C. Gong said on X that she barely uses Spotify anymore, accidentally undermining the company's fair use defense and handing the music industry a powerful argument in its lawsuit against the AI music startup.