Trump administration drafts AI contract rules requiring companies to license systems for "all lawful use"

The Trump administration has drafted strict new guidelines for civilian AI contracts. Per a draft seen by the Financial Times, AI companies would have to grant the government an irrevocable license for "all lawful use," the exact wording Anthropic has resisted and OpenAI has accepted.

The GSA guidelines, drafted over recent months, also ban ideological or partisan judgments in AI outputs, such as favoring diversity programs, a requirement that is itself ideological and echoes China's political guardrails for AI manufacturers. Another clause requires disclosure of any model tweaks made to comply with non-US regulations like the EU Digital Services Act.

The guidelines land amid the Anthropic fallout: last week, the Pentagon killed a $200 million contract after the company demanded restrictions on mass surveillance of US citizens and autonomous weapons for reliability reasons. Defense Secretary Pete Hegseth accused Anthropic of seeking veto power over military decisions, and the White House labeled it a supply chain risk.

Anthropic's Claude AI uncovers over 100 security vulnerabilities in Firefox

Mozilla and Anthropic have teamed up to find more than 100 bugs in Firefox. Anthropic used its Claude AI model to scan the browser's codebase for security flaws, and the model found 14 serious vulnerabilities, 22 official security advisories (CVEs), and 90 additional bugs. All critical vulnerabilities have been patched in Firefox 148, Mozilla says.

Firefox vulnerability discoveries spiked in February 2026, nearly tripling compared to previous months. Of the 52 CVEs found, 22 trace back to Anthropic's Opus 4.6 AI model. | Image: Anthropic

Claude identified entire classes of errors that conventional automated testing methods like fuzzing had missed despite decades of use, according to Mozilla. Anthropic delivered reproducible test cases alongside its findings, making the review process significantly easier. Going forward, Mozilla plans to integrate AI-powered code analysis into its internal security workflow.

Anthropic says it picked Firefox as a testing ground because it's one of the most heavily scrutinized open-source projects in the world. The company has published a detailed technical report on its findings. Anthropic also recently shipped a dedicated cybersecurity feature for its in-house AI tool, Claude Code.

OpenAI offers open-source maintainers six months of free ChatGPT Pro and Codex access

OpenAI is launching a new support program for open-source developers. Core maintainers of public software projects can apply for six months of free access to ChatGPT Pro with Codex, API credits, and Codex Security. Access to Codex Security, a new AI tool for code security checks, will be reviewed case by case and granted only selectively because of GPT-5.4's capabilities, according to OpenAI.

Developers who prefer other programming tools like OpenCode, Cline, or OpenClaw can also apply. Projects that don't meet all the criteria but play an important role in the broader software ecosystem are also welcome to apply. The program builds on OpenAI's existing Codex Open Source Fund, which the company has backed with one million dollars.

Anthropic's Claude Code subscription may consume up to $5,000 in compute per month while charging the user just $200

Anthropic’s $200 Claude Code subscription could consume up to $5,000 in compute per user, according to Cursor’s internal analysis reported by Forbes. The numbers reveal just how aggressively AI companies are subsidizing their coding tools and what that could mean for prices once these tools become essential.
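A quick back-of-the-envelope calculation, using only the figures reported by Forbes (the $200 monthly price and the $5,000 upper-bound compute estimate from Cursor's analysis), shows the scale of the subsidy for the heaviest users:

```python
# Subsidy math for a heavy Claude Code user, per the reported figures:
# $200/month subscription vs. up to $5,000/month in compute consumed.
subscription_price = 200   # USD per month, subscription price
compute_cost = 5_000       # USD per month, upper-bound compute estimate

subsidy = compute_cost - subscription_price        # absolute shortfall Anthropic absorbs
subsidy_share = subsidy / compute_cost             # fraction of compute cost not covered
cost_multiple = compute_cost / subscription_price  # compute cost as a multiple of the price

print(f"Shortfall per user: ${subsidy:,}/month")       # → Shortfall per user: $4,800/month
print(f"Subsidized share: {subsidy_share:.0%}")        # → Subsidized share: 96%
print(f"Compute costs {cost_multiple:.0f}x the price") # → Compute costs 25x the price
```

In other words, at the reported ceiling the subscription fee would cover only about 4 percent of what such a user actually costs to serve.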

OpenAI and Oracle stop expanding their flagship data center in Texas over power supply delays

OpenAI and Oracle have decided not to expand their data center site in Abilene, Texas, beyond the planned 1.2 gigawatts. Oracle has leased eight buildings at the location for OpenAI, designed to house around 400,000 Nvidia Blackwell chips, but only two have been completed so far.

Oracle had pushed to get OpenAI into six more buildings, but both sides passed because the additional power supply wouldn't be available for at least a year. Instead of expanding the current Blackwell generation, OpenAI plans to buy Nvidia's next-generation Vera Rubin chips for a different data center. According to Bloomberg, Nvidia is now trying to get Meta to fill the vacant space, though those talks are still in the early stages.

OpenAI's compute manager Sachin Katti described the Stargate site as already one of the largest AI data center campuses in the country. "We considered expanding it further, but ultimately chose to put that additional capacity in other locations," Katti writes, adding that OpenAI is currently developing more than half a dozen sites across several US states.

Anthropic turns Claude Code into a background worker with local scheduled tasks

Anthropic's coding tool Claude Code now supports local, scheduled tasks through a new /loop command. Users can set up recurring jobs at fixed intervals—minutes, hours, or days—that run in the background as long as Claude Code is active and auto-delete after three days. The feature uses standard cron expressions and the local time zone. One-time natural language reminders like "remind me at 3 PM to push the release branch" are also supported, with up to 50 scheduled tasks per session.
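Anthropic hasn't published its scheduler internals; as a rough illustration of how standard five-field cron expressions map onto local run times, here is a minimal matcher (the function names and the supported subset, `*`, `*/n`, comma lists, and plain numbers, are my own sketch, not Anthropic's code):

```python
from datetime import datetime

def field_matches(spec: str, value: int) -> bool:
    """Match one cron field against a value: '*', '*/n', 'a,b,c', or 'n'."""
    for part in spec.split(","):
        if part == "*":
            return True
        if part.startswith("*/"):
            if value % int(part[2:]) == 0:
                return True
        elif int(part) == value:
            return True
    return False

def cron_matches(expr: str, dt: datetime) -> bool:
    """Check a local datetime against a standard 5-field cron expression:
    minute, hour, day-of-month, month, day-of-week (0 = Sunday)."""
    minute, hour, dom, month, dow = expr.split()
    return (field_matches(minute, dt.minute)
            and field_matches(hour, dt.hour)
            and field_matches(dom, dt.day)
            and field_matches(month, dt.month)
            and field_matches(dow, (dt.weekday() + 1) % 7))  # cron: Sunday = 0

# "Every 2 hours on the hour" -- the kind of recurring log check described above
print(cron_matches("0 */2 * * *", datetime(2026, 2, 18, 14, 0)))  # → True
print(cron_matches("0 */2 * * *", datetime(2026, 2, 18, 15, 0)))  # → False
```

A real scheduler would additionally compute the *next* matching time and sleep until then, but the matching rules above are the core of what a cron expression encodes.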

Anthropic developer Thariq Shihipar gives the example of checking error logs every few hours, with Claude Code automatically creating pull requests for fixable bugs. The feature gets especially interesting when connected to other data sources, he says. Claude Code creator Boris Cherny adds use cases like auto-monitoring pull requests with self-fixing or generating morning Slack summaries. Anthropic has published a detailed guide.

Claude Code recently received updates adding automated desktop functions, remote control for smartphones, and memory.

Despite Pentagon ban, Google, AWS, and Microsoft stick with Anthropic's AI models

Google, Amazon, and Microsoft are assuring customers that Anthropic's AI technology will remain available outside of defense projects, CNBC reports, citing company spokespeople. None of the three have issued public statements.

Microsoft said its legal team reviewed the classification and concluded that Anthropic products, including Claude, can stay available through M365, GitHub, and Microsoft's AI Foundry, except for the US Department of War. Google continues offering Claude through Vertex AI, Amazon through Bedrock and GovCloud.

The reassurances come amid an ongoing dispute between Anthropic and the US Department of War, which designated Anthropic as a supply chain risk. CEO Dario Amodei called the designation legally untenable and said his company plans to fight it in court. Some defense contractors have already told employees to switch to alternatives like OpenAI.

According to the Washington Post, Claude was still used in the recent US attack on Iran despite the designation. The transition to other AI models is expected to take six months, with OpenAI among the new partners.

Source: CNBC
Anthropic's new marketplace lets enterprise customers spend their existing AI budget on third-party tools

Anthropic is launching the Anthropic Marketplace, a storefront where corporate customers can buy third-party software built on Anthropic's AI models. Launch partners include Snowflake, Harvey, and Replit. Anthropic isn't taking a commission, and customers can put a portion of their existing annual Anthropic spend toward these tools.

First providers on the Anthropic Marketplace. | Image: Screenshot

The strategy mirrors what Amazon does with the AWS Marketplace or Microsoft with Azure: Anthropic is positioning itself as the orchestrator of a growing AI software ecosystem. By waiving commissions and letting customers tap existing budgets for third-party tools, the company makes it easy to stay in its ecosystem.

The commission waiver is less generous than it sounds, though. Every marketplace app runs on Anthropic's models, so partners already pay for the model capacity they use. Anthropic doesn't earn from the marketplace itself (yet), but it profits from the growing demand for its models that the marketplace generates. Interested partners can sign up for the waiting list.