Current language model training leaves large parts of the internet on the table
Large language models learn from web data, but which pages actually make it into training sets depends heavily on a seemingly mundane choice: the HTML extractor. Researchers at Apple, Stanford, and the University of Washington found that three common extraction tools pull surprisingly different content from the same web pages.
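The effect is easy to reproduce with a toy page. The sketch below (Python standard library only, not the actual tools from the study) contrasts a boilerplate-aware extractor that keeps only <article> text with a naive one that keeps every text node; the two "see" different documents from the same HTML.

```python
from html.parser import HTMLParser

HTML = """
<html><body>
  <nav>Home | About | Login</nav>
  <article><p>Model training data comes from the open web.</p></article>
  <footer>(c) 2025 Example Corp</footer>
</body></html>
"""

class ArticleExtractor(HTMLParser):
    """Boilerplate-aware: keeps text only inside <article> tags."""
    def __init__(self):
        super().__init__()
        self.depth = 0
        self.chunks = []
    def handle_starttag(self, tag, attrs):
        if tag == "article":
            self.depth += 1
    def handle_endtag(self, tag):
        if tag == "article":
            self.depth -= 1
    def handle_data(self, data):
        if self.depth > 0 and data.strip():
            self.chunks.append(data.strip())

class AllTextExtractor(HTMLParser):
    """Naive: keeps every text node, navigation and footer included."""
    def __init__(self):
        super().__init__()
        self.chunks = []
    def handle_data(self, data):
        if data.strip():
            self.chunks.append(data.strip())

careful = ArticleExtractor()
careful.feed(HTML)
naive = AllTextExtractor()
naive.feed(HTML)

print(careful.chunks)  # article body only
print(naive.chunks)    # also menu and footer boilerplate
```

Scale that difference across billions of pages and the extractor choice changes which text a model ever trains on.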
Anthropic vows to fight Pentagon "supply chain risk" designation in court
Anthropic says it will take the US government to court after Secretary of Defense Pete Hegseth moved to classify the AI company as a supply chain risk, a designation previously reserved for foreign adversaries. Anthropic calls the classification illegal and says it will "challenge any supply chain risk designation in court."
We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government.
Anthropic
Hegseth also implied that military suppliers should no longer be allowed to do business with Anthropic. Anthropic argues there is no legal basis for that move: a classification under 10 USC 3252 applies only to the use of Claude in direct contracts with the Department of Defense. For private customers, commercial contracts, and access through the API or claude.ai, nothing would change.
OpenAI signs Pentagon deal for classified AI networks hours after Anthropic gets banned from federal agencies
OpenAI struck a deal with the Pentagon just hours after Anthropic was barred from government contracts. OpenAI claims to operate under the same safety principles as Anthropic, but the language the two companies have used so far suggests their commitments differ.
Anthropic's dispute with the Pentagon is now rippling through Google and OpenAI. According to the New York Times, more than 100 Google AI employees sent a letter to chief scientist Jeff Dean, who had previously voiced support for Anthropic's position, demanding that Google draw the same red lines: no surveillance of American citizens and no use of Gemini for autonomous weapons without human oversight. Separately, nearly 50 OpenAI and 175 Google employees published an open letter criticizing the Pentagon's negotiating tactics.
We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.
According to the Wall Street Journal, OpenAI CEO Sam Altman told his employees that OpenAI is working on its own Pentagon contract that would include the same safety guidelines Anthropic is pushing for. Altman hopes to find a solution that works for other AI companies as well.
Meta signs multi-billion dollar deal to rent Google's TPU chips
Meta has signed a multi-year, multi-billion dollar contract with Google to rent its AI chips, Tensor Processing Units (TPUs), for developing new AI models. That's according to The Information. Meta is also looking into buying TPUs outright for its own data centers starting next year.
The deal takes direct aim at Nvidia, which dominates the AI chip market and has been Meta's go-to GPU supplier for AI training. Just days earlier, Meta had announced plans to buy millions of GPUs from Nvidia and AMD. Internally, Google Cloud executives have set a goal of capturing through TPU sales up to ten percent of Nvidia's annual revenue, which is roughly $200 billion. Google has also launched a joint venture with an investment firm to lease TPUs to other customers.
Here's where it gets complicated: Google itself is one of Nvidia's biggest customers, since cloud customers still expect access to GPU servers. So Google has to keep buying Nvidia's latest chips to stay competitive in the cloud market, while simultaneously trying to eat into Nvidia's market share with its own silicon. OpenAI reportedly managed to negotiate 30 percent lower prices from Nvidia simply because TPUs exist as an alternative.
Figma and OpenAI's Codex get a two-way integration
A new integration links Figma's design platform directly with OpenAI's Codex. Teams can automatically generate editable Figma designs from code and convert designs into working code. It runs on the open MCP standard, supports Figma Design, Figma Make, and FigJam, and is configured through the Codex desktop app for macOS.
Until now, moving between Figma and code was mostly a one-way street. Dev Mode offered basic HTML/CSS snippets, plugins exported designs as React or HTML, and Figma Make generated React components from text input. These tools worked in isolation without understanding the full project. The new integration creates an end-to-end connection where the AI accesses code, Figma files, and the design system simultaneously.
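MCP-based integrations like this exchange JSON-RPC 2.0 messages, with tools exposed by the server and invoked via a "tools/call" method. The sketch below builds such a request in Python; the tool name "get_design_node" and its arguments are hypothetical, not Figma's actual MCP surface.

```python
import json

# Minimal sketch of an MCP tool call as a JSON-RPC 2.0 message.
# "get_design_node" and its arguments are invented for illustration;
# the real Figma/Codex integration defines its own tool names.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_design_node",  # hypothetical design-side tool
        "arguments": {"file_key": "abc123", "node_id": "1:2"},
    },
}

# Serialize to the wire format a client would send over the transport.
payload = json.dumps(request)
print(payload)
```

The point of the shared protocol is that any MCP client, Codex included, can discover and call these tools without bespoke glue code per integration.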
Figma was one of the first partners with its own ChatGPT app and uses ChatGPT Enterprise internally. According to OpenAI, over one million people access Codex weekly, with usage up more than 400 percent since the start of the year.