
Matthias Bastian

Matthias is the co-founder and publisher of THE DECODER, exploring how AI is fundamentally changing the relationship between humans and computers.

OpenAI calls Stuart Russell a "doomer" in court after its CEO co-signed his AI extinction warning

Fear generates attention, and OpenAI usually knows how to use that. But in court, the company is trying to discredit an AI expert as a doomsday prophet, even though CEO Sam Altman spent years spreading the same warnings when they still served his own agenda.

OpenAI promises Canada tighter safety protocols after ChatGPT flagged a shooter's violent chats but never called police

In a letter to AI Minister Evan Solomon, OpenAI has promised the Canadian government it will tighten its safety protocols. The move follows a fatal shooting at a school in Tumbler Ridge, British Columbia, that killed eight people. The suspect, Jesse Van Rootselaar, had previously interacted with ChatGPT. An internal algorithm flagged the interactions as possible warnings of real-world violence, and OpenAI employees reviewed them. The company blocked the account but ultimately decided not to contact police.

According to the Wall Street Journal, OpenAI now plans to adopt more flexible criteria for sharing account data with authorities, establish direct lines of communication with Canadian law enforcement, and improve its systems for detecting evasion tactics. OpenAI Vice President Ann O'Leary said the account would have been reported under the new rules. Canada's Justice Minister Sean Fraser warned that new AI regulations could follow if OpenAI doesn't act quickly.

Anthropic calls Pentagon's supply chain risk label illegal and vows to challenge it in court

Anthropic says it will take the US government to court after Secretary of Defense Pete Hegseth moved to classify the AI company as a supply chain risk, a designation previously reserved for foreign adversaries. Anthropic calls the classification illegal and says it will "challenge any supply chain risk designation in court."

We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government.

Anthropic

Hegseth also implied military suppliers should no longer be allowed to do business with Anthropic. But according to Anthropic, there's no legal basis for that move: the classification under 10 USC 3252 only applies to the use of Claude in direct contracts with the Department of Defense. For private customers, commercial contracts, and access through the API or claude.ai, nothing would change.

The conflict traces back to a failed negotiation process. Anthropic refused to release Claude for mass domestic surveillance and fully autonomous weapons systems, arguing that current AI models are too unreliable for these purposes and that mass surveillance violates fundamental rights. OpenAI has since taken over the deal.

OpenAI signs Pentagon deal for classified AI networks hours after Anthropic gets banned from federal agencies

OpenAI struck a deal with the Pentagon just hours after Anthropic was barred from government contracts. OpenAI claims to operate under the same safety principles as Anthropic, but the language both companies have used so far suggests differences.

Google DeepMind and OpenAI employees demand Anthropic-style red lines on Pentagon surveillance and autonomous weapons

Anthropic's dispute with the Pentagon is now rippling through Google and OpenAI. According to the New York Times, more than 100 Google AI employees sent a letter to chief scientist Jeff Dean, who had previously voiced support for Anthropic's position, demanding that Google draw the same red lines: no use of Gemini for surveillance of American citizens and no autonomous weapons without human oversight. Separately, nearly 50 OpenAI and 175 Google employees published an open letter criticizing the Pentagon's negotiating tactics.

We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.

Quote from the open letter "We will not be divided"

According to the Wall Street Journal, OpenAI CEO Sam Altman told his employees that OpenAI is working on its own Pentagon contract that would include the same safety guidelines Anthropic is pushing for. Altman hopes to find a solution that works for other AI companies as well.

Meta signs multi-billion dollar deal to rent Google's TPUs in a direct challenge to Nvidia's AI chip dominance

Meta has signed a multi-year, multi-billion dollar contract with Google to rent its AI chips—Tensor Processing Units (TPUs)—for developing new AI models. That's according to The Information. Meta is also looking into buying TPUs outright for its own data centers starting next year.

The deal takes direct aim at Nvidia, which dominates the AI chip market and has been Meta's go-to GPU supplier for AI training. Just days earlier, Meta had announced plans to buy millions of GPUs from Nvidia and AMD. Internally, Google Cloud executives have set a goal of capturing up to ten percent of Nvidia's annual revenue—roughly $200 billion—through TPU sales. Google has also launched a joint venture with an investment firm to lease TPUs to other customers.

Here's where it gets complicated: Google itself is one of Nvidia's biggest customers, since cloud customers still expect access to GPU servers. So Google has to keep buying Nvidia's latest chips to stay competitive in the cloud market, while simultaneously trying to eat into Nvidia's market share with its own silicon. OpenAI reportedly managed to negotiate 30 percent lower prices from Nvidia simply because TPUs exist as an alternative.

Claude's Cowork desktop app now runs scheduled tasks so your AI assistant works while you sleep

Anthropic's AI assistant Claude is picking up new features in its desktop app Cowork. Users can now set up scheduled tasks that Claude handles automatically at set times: a morning briefing, weekly spreadsheet updates, or Friday presentations for the team.

Anthropic also points to the plugins already available that give Cowork specialized knowledge in areas like design, technology, and law, and maintains a full overview of available plugins. In addition, a new "Customize" section in Cowork's sidebar lets users manage all their plugins, skills, and connections from one place.

Cowork is available as a research preview for macOS and Windows, open to all paying Claude subscribers. As with any agent-based AI system, there are cybersecurity considerations. It's worth being careful about which parts of your computer you give the software access to.