
ChatGPT now explains math and physics with interactive visualizations

OpenAI is rolling out dynamic visual explanations for more than 70 math and science concepts in ChatGPT. Users can tweak variables in real time and see the effects on graphs and formulas instantly. For now, the topics are geared mainly toward high school and college students, covering things like binomial squares, exponential decay, Ohm's law, compound interest, and trigonometric identities.
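To make the idea concrete, here is a minimal, purely illustrative Python sketch of one such interactive explanation, not OpenAI's implementation: an exponential-decay curve whose decay constant λ can be adjusted with a slider, with the graph updating instantly. The choice of matplotlib, the starting value N0, and the slider range are all assumptions for the sake of the example.

```python
# Illustrative sketch (not OpenAI's implementation): an interactive
# exponential-decay plot where a slider controls the decay constant,
# mirroring the "tweak a variable, watch the graph update" idea.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.widgets import Slider

t = np.linspace(0, 10, 400)      # time axis
N0 = 100.0                       # assumed initial quantity

fig, ax = plt.subplots()
plt.subplots_adjust(bottom=0.25)  # leave room for the slider below the plot
line, = ax.plot(t, N0 * np.exp(-0.5 * t))
ax.set_xlabel("t")
ax.set_ylabel("N(t)")
ax.set_title("Exponential decay: N(t) = N0 * exp(-λt)")

slider_ax = fig.add_axes([0.15, 0.1, 0.7, 0.04])
lam_slider = Slider(slider_ax, "λ", 0.05, 2.0, valinit=0.5)

def update(lam):
    # Recompute the curve whenever the decay constant changes.
    line.set_ydata(N0 * np.exp(-lam * t))
    fig.canvas.draw_idle()

lam_slider.on_changed(update)
plt.show()
```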

According to OpenAI, the interactive explanations are available now to all logged-in users worldwide, regardless of their subscription plan. Over time, OpenAI plans to expand the learning modules to cover additional subjects.

German court says "It's AI" isn't enough to void copyright

A German regional court has ruled that song lyrics written by a human are still protected by copyright, even if the music was made with AI tools like SunoAI. Simply claiming a work is AI-generated isn't enough to strip that protection; you need proof.

OpenAI plans to acquire Promptfoo and bake AI security testing directly into its Frontier enterprise platform

OpenAI plans to acquire Promptfoo, a security platform that helps companies catch and fix vulnerabilities in AI applications during development. If the deal goes through, the technology will be baked directly into OpenAI's Frontier enterprise platform, which companies use to build and deploy AI assistants.

The plan is to make automated security testing for prompt injections, jailbreaks, and data leaks a native part of Frontier. OpenAI also wants to beef up oversight, audit trails, and regulatory compliance tooling for enterprise AI deployments.

Promptfoo maintains a popular open-source project that will continue after the acquisition. The deal hasn't closed yet, and neither company has shared financial details. The startup had raised $23 million from investors at an $86 million valuation as of summer 2025.

Microsoft brings Anthropic's Claude Cowork into Copilot to run tasks across Outlook, Teams, and Excel

Microsoft has integrated Anthropic's Claude Cowork technology into Copilot. The new feature lets Microsoft 365 handle tasks more autonomously: users describe what they want done, and Cowork builds a plan that runs in the background, pulling from emails, meetings, files, and data across Outlook, Teams, and Excel. It's essentially Claude Cowork's approach, adapted for Microsoft's ecosystem. Use cases include calendar cleanup, meeting prep, company research, and product launch planning. When something's unclear, Cowork asks follow-up questions and waits for approval before making changes.

Cowork runs within Microsoft 365's existing security and compliance boundaries. It's currently in a limited research preview and is expected to become more widely available through the Frontier program by the end of March 2026.

Microsoft's growing willingness to work with AI providers outside OpenAI is notable. Claude Cowork builds on the principles behind Anthropic's Claude Code, which has picked up serious momentum among developers. OpenAI doesn't offer anything comparable yet, but is working on Frontier, an agent-based B2B framework designed to plug deeper into corporate IT.

Anthropic's groundbreaking lawsuit challenges the government's power to punish AI safety decisions

Anthropic is taking the US government to court. The AI developer filed a lawsuit in federal court in San Francisco against 17 federal agencies and the Executive Office of the President, claiming the government is punishing it for refusing to remove two guardrails from Claude: no lethal autonomous warfare and no mass surveillance of Americans.

According to the lawsuit, the Department of War threatened Anthropic with two contradictory moves at once: invoking the Defense Production Act to force the company to hand over Claude, or banning it from the supply chain as a security risk. Anthropic argues the government can't simultaneously claim a company is so essential it must be conscripted by law and so dangerous it should be blacklisted.

The lawsuit also challenges the legal basis for the government's actions. The statute cited, 10 U.S.C. § 3252, was written for cases where a foreign adversary might sabotage or subvert an information system. The government's own definition of "foreign adversary" covers China, Russia, Iran, North Korea, Cuba, and Venezuela, a list that does not include a US company like Anthropic.

Anthropic's Claude Opus 4.6 saw through an AI test, cracked the encryption, and grabbed the answers itself

Anthropic’s Claude Opus 4.6 independently figured out it was being tested during a benchmark, identified the specific test, and cracked its encrypted answer key. According to Anthropic, this is the first documented case of its kind.

OpenAI hardware and robotics leader quits over military deal she says lacked enough deliberation

Update, March 9, 2026:

OpenAI released the following statement:

Caitlin Kalinowski was not the head of all robotics at OpenAI. She was responsible for hardware and operations within the Robotics Division. She was also not a researcher and did not lead Robotics Engineering. The Robotics Division is led by Aditya Ramesh, while the Consumer Hardware Division is headed by Peter Welinder.

Original article from March 8, 2026:

OpenAI's hardware and robotics chief Caitlin Kalinowski resigned over the company's military collaboration, announcing her decision on LinkedIn and X. She says surveillance without judicial oversight and lethal autonomy without human sign-off "deserved more deliberation than they got." Kalinowski joined OpenAI from Meta in November 2024; at Meta, she built the Orion AR headset.

Caitlin Kalinowski announced her resignation from OpenAI on LinkedIn, citing concerns about surveillance and lethal autonomy. | Kalinowski via LinkedIn

Her departure follows a contract between OpenAI and the Pentagon giving the military access to its models, a deal Anthropic had already rejected over safety concerns. OpenAI says the contract includes the same hard red lines against mass surveillance and autonomous weapons that Anthropic demanded. But the company agreed to softer "all lawful use" language that still leaves room for interpretation. The US government now wants to make that wording standard for all AI companies working with the state.

Trump administration drafts AI contract rules requiring companies to license systems for "all lawful use"

The Trump administration has drafted strict new guidelines for civilian AI contracts. Per a draft seen by the Financial Times, AI companies would have to grant the government an irrevocable license for "all lawful use," the exact wording Anthropic has resisted and OpenAI has accepted.

The GSA guidelines, drafted over recent months, also ban ideological or partisan judgments in AI outputs, such as favoring diversity programs. That ban is itself an ideological requirement and echoes China's political guardrails for AI companies. Another clause requires disclosure of any model tweaks made to comply with non-US regulations like the EU Digital Services Act.

The guidelines land amid the Anthropic fallout: last week, the Pentagon killed a $200 million contract after the company demanded restrictions on mass surveillance of US citizens and on autonomous weapons, citing reliability concerns. Defense Secretary Pete Hegseth accused Anthropic of seeking veto power over military decisions, and the White House labeled it a supply chain risk.