Matthias Bastian

Matthias is the co-founder and publisher of THE DECODER, exploring how AI is fundamentally changing the relationship between humans and computers.
Microsoft brings Anthropic's Claude Cowork into Copilot to run tasks across Outlook, Teams, and Excel

Microsoft has integrated Anthropic's Claude Cowork technology into Copilot. The new feature lets Microsoft 365 handle tasks more autonomously: users describe what they want done, and Cowork builds a plan that runs in the background, pulling from emails, meetings, files, and data across Outlook, Teams, and Excel. It's essentially Claude Cowork's approach, adapted for Microsoft's ecosystem. Use cases include calendar cleanup, meeting prep, company research, and product launch planning. When something's unclear, Cowork asks follow-up questions and waits for approval before making changes.
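
The interaction pattern here is approval-gated autonomy: the agent plans and reads freely but pauses before any change. A minimal sketch of that loop in Python (all names and the canned plan are hypothetical; Microsoft has not published a Cowork API):

```python
from dataclasses import dataclass

@dataclass
class Step:
    description: str
    mutates_data: bool  # reading mail is safe; moving a meeting is not

def plan_task(request: str) -> list[Step]:
    # Stand-in for the model drafting a plan from the user's request.
    return [
        Step("Scan inbox and calendar for conflicts", mutates_data=False),
        Step("Move Friday 1:1 to 10:00", mutates_data=True),
    ]

def run_task(request: str) -> None:
    for step in plan_task(request):
        if step.mutates_data:
            # The behavior described above: ask before changing anything
            # in Outlook, Teams, or Excel.
            answer = input(f"Apply '{step.description}'? [y/N] ")
            if answer.strip().lower() != "y":
                print(f"Skipped: {step.description}")
                continue
        print(f"Done: {step.description}")

run_task("Clean up my calendar for next week")
```

The design choice worth noting is that read-only steps run unattended while anything that modifies data blocks on explicit consent, which is what lets the plan run in the background without surprising the user.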

Cowork runs within Microsoft 365's existing security and compliance boundaries. It's currently in a limited research preview and is expected to become more widely available through Microsoft's Frontier program by the end of March 2026.

Microsoft's growing willingness to work with AI providers outside OpenAI is notable. Claude Cowork builds on the principles behind Anthropic's Claude Code, which has picked up serious momentum among developers. OpenAI doesn't offer anything comparable yet, but is working on Frontier, an agent-based B2B framework designed to plug deeper into corporate IT.

Anthropic's groundbreaking lawsuit challenges the government's power to punish AI safety decisions

Anthropic is taking the US government to court. The AI developer filed a lawsuit in federal court in San Francisco against 17 federal agencies and the Executive Office of the President, claiming the government is punishing it for refusing to remove two guardrails from Claude: no lethal autonomous warfare and no mass surveillance of Americans.

The Department of War threatened Anthropic with two contradictory moves at once, the lawsuit states: invoke the Defense Production Act to force the company to hand over Claude, or ban it from the supply chain as a security risk. Anthropic argues the government can't claim a company is so essential it must be conscripted by law and so dangerous it should be blacklisted at the same time.

The lawsuit also challenges the legal basis for the government's actions. The statute cited, 10 U.S.C. § 3252, was written for cases where a foreign adversary might sabotage or subvert an information system, and the government's own definition of "foreign adversary" covers China, Russia, Iran, North Korea, Cuba, and Venezuela. A US company like Anthropic plainly falls outside that list.

Anthropic's Claude Opus 4.6 saw through an AI test, cracked the encryption, and grabbed the answers itself

Anthropic’s Claude Opus 4.6 independently figured out it was being tested during a benchmark, identified the specific test, and cracked its encrypted answer key. According to Anthropic, this is the first documented case of its kind.

OpenAI hardware and robotics leader quits over military deal she says lacked enough deliberation

Update, March 9, 2026:

OpenAI released the following statement:

Caitlin Kalinowski was not the head of all robotics at OpenAI. She was responsible for hardware and operations within the Robotics Division. She was also not a researcher and did not lead Robotics Engineering. The Robotics Division is led by Aditya Ramesh, while the Consumer Hardware Division is headed by Peter Welinder.

Original article from March 8, 2026:

OpenAI's hardware and robotics chief Caitlin Kalinowski resigned over the company's military collaboration, announcing her decision on LinkedIn and X. She says surveillance without judicial oversight and lethal autonomy without human sign-off "deserved more deliberation than they got." Kalinowski joined OpenAI from Meta in November 2024; at Meta, she built the Orion AR headset.

Caitlin Kalinowski announced her resignation from OpenAI on LinkedIn, citing concerns about surveillance and lethal autonomy. | Kalinowski via LinkedIn

Her departure follows a contract between OpenAI and the Pentagon giving the military access to its models, a deal Anthropic had already rejected over safety concerns. OpenAI says the contract includes the same hard red lines against mass surveillance and autonomous weapons that Anthropic demanded. But the company agreed to softer "all lawful use" language that still leaves room for interpretation. The US government now wants to make that wording standard for all AI companies working with the state.

Trump administration drafts AI contract rules requiring companies to license systems for "all lawful use"

The Trump administration has drafted strict new guidelines for civilian AI contracts. Per a draft seen by the Financial Times, AI companies would have to grant the government an irrevocable license for "all lawful use," the exact wording Anthropic has resisted and OpenAI has accepted.

The GSA guidelines, drafted over recent months, also ban ideological or partisan judgments in AI outputs, such as favoring diversity programs. That demand is itself an ideological requirement, and it echoes China's political guardrails for AI companies. Another clause requires disclosure of any model tweaks made to comply with non-US regulations like the EU Digital Services Act.

The guidelines land amid the Anthropic fallout: last week, the Pentagon killed a $200 million contract after the company demanded restrictions on mass surveillance of US citizens and, citing reliability concerns, on autonomous weapons. Defense Secretary Pete Hegseth accused Anthropic of seeking veto power over military decisions, and the White House labeled it a supply chain risk.

Anthropic's Claude AI uncovers over 100 security vulnerabilities in Firefox

Mozilla and Anthropic have teamed up to root out more than 100 bugs in Firefox. Anthropic used its Claude AI model to scan the browser's codebase for security flaws, and the model found 14 serious vulnerabilities, 22 issues that received official security advisories (CVEs), and 90 additional bugs. All critical vulnerabilities have been patched in Firefox 148, Mozilla says.

Firefox vulnerability discoveries spiked in February 2026, nearly tripling compared to previous months. Of the 52 CVEs found, 22 trace back to Anthropic's Opus 4.6 AI model. | Image: Anthropic

Claude identified entire classes of errors that conventional automated testing methods like fuzzing had missed despite decades of use, according to Mozilla. Anthropic delivered reproducible test cases alongside its findings, making the review process significantly easier. Going forward, Mozilla plans to integrate AI-powered code analysis into its internal security workflow.
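
For context, this is roughly what a fuzzing setup looks like. A minimal sketch using Google's atheris fuzzer for Python (the parser is a hypothetical stand-in; Firefox's real fuzz targets are C++ components): fuzzing mutates random inputs toward new code coverage, which surfaces shallow input-handling crashes like the one below but tends to miss the logic-level bug classes Mozilla says Claude caught.

```python
import sys
import atheris

@atheris.instrument_func  # track coverage so the fuzzer can steer its inputs
def parse_content_length(data: bytes) -> int:
    # Hypothetical target: an uncaught ValueError here counts as a crash,
    # which is exactly what the fuzzer is hunting for.
    text = data.decode("utf-8", errors="ignore")
    if text.startswith("Content-Length:"):
        return int(text[len("Content-Length:"):].strip())
    return -1

def test_one_input(data: bytes) -> None:
    parse_content_length(data)

atheris.Setup(sys.argv, test_one_input)
atheris.Fuzz()  # runs until a crashing input is found, then saves it
```

The crashing input the fuzzer saves is a "reproducible test case" in miniature: a concrete byte string a reviewer can replay to confirm the bug and then verify the fix, which is the form Anthropic delivered its findings in.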

Anthropic says it picked Firefox as a testing ground because it's one of the most heavily scrutinized open-source projects in the world. The company has published a detailed technical report on its findings. Anthropic also recently shipped a dedicated cybersecurity feature for Claude Code, its AI coding tool.

OpenAI offers open-source maintainers six months of free ChatGPT Pro and Codex access

OpenAI is launching a new support program for open-source developers. Core maintainers of public software projects can apply for six months of free access to ChatGPT Pro with Codex, API credits, and Codex Security. Access to Codex Security, a new AI tool for code security checks, will be reviewed case by case and granted only selectively due to the capabilities of GPT-5.4, according to OpenAI.

Developers who prefer other programming tools like OpenCode, Cline, or OpenClaw can also apply, as can projects that don't meet every criterion but play an important role in the broader software ecosystem. The program builds on OpenAI's existing Codex Open Source Fund, which the company has backed with $1 million.

Anthropic's Claude Code subscription may consume up to $5,000 in compute per month while charging the user just $200

Anthropic's $200 Claude Code subscription could consume up to $5,000 in compute per user, according to Cursor's internal analysis reported by Forbes. The numbers reveal just how aggressively AI companies are subsidizing their coding tools and what that could mean for prices once these tools become essential.
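
Taken at face value, the arithmetic is stark: $5,000 / $200 = 25, so the heaviest users may be consuming up to 25 times more compute each month than they pay for.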