The GSA guidelines, drafted over recent months, also ban ideological or partisan judgments in AI outputs, such as favoring diversity programs, a requirement that is itself ideological and echoes the political guardrails China imposes on its AI makers. Another clause requires disclosure of any model tweaks made to comply with non-US regulations such as the EU Digital Services Act.
The guidelines land amid the Anthropic fallout: last week, the Pentagon killed a $200 million contract after the company insisted on restricting mass surveillance of US citizens and limiting autonomous weapons use, citing reliability concerns. Defense Secretary Pete Hegseth accused Anthropic of seeking veto power over military decisions, and the White House labeled the company a supply chain risk.
Mozilla and Anthropic have teamed up to find more than 100 bugs in Firefox. Anthropic used its Claude AI model to scan the browser's codebase for security flaws; the model turned up 14 serious vulnerabilities, 22 issues that received official security advisories (CVEs), and 90 additional bugs. All critical vulnerabilities have been patched in Firefox 148, Mozilla says.
Firefox vulnerability discoveries spiked in February 2026, nearly tripling compared to previous months. Of the 52 CVEs found, 22 trace back to Anthropic's Opus 4.6 AI model. | Image: Anthropic
Claude identified entire classes of errors that conventional automated testing methods like fuzzing had missed despite decades of use, according to Mozilla. Anthropic delivered reproducible test cases alongside its findings, making the review process significantly easier. Going forward, Mozilla plans to integrate AI-powered code analysis into its internal security workflow.
OpenAI is launching a new support program for open-source developers. Core maintainers of public software projects can apply for six months of free access to ChatGPT Pro with Codex, API credits, and Codex Security. Access to Codex Security, a new AI tool for code security checks, will be reviewed case by case and granted only selectively because of GPT-5.4's capabilities, according to OpenAI.
Developers who prefer other programming tools like OpenCode, Cline, or OpenClaw can also apply. Projects that don't meet all the criteria but play an important role in the broader software ecosystem are also welcome to apply. The program builds on OpenAI's existing Codex Open Source Fund, which the company has backed with one million dollars.
Anthropic's Claude Code subscription may consume up to $5,000 in compute per month while charging the user just $200
Anthropic’s $200 Claude Code subscription could consume up to $5,000 in compute per user each month, according to Cursor’s internal analysis, as reported by Forbes. The numbers show just how aggressively AI companies are subsidizing their coding tools, and what that could mean for prices once these tools become essential.
OpenAI and Oracle have decided not to expand their data center site in Abilene, Texas, beyond the planned 1.2 gigawatts. Oracle has leased eight buildings at the location for OpenAI, designed to house around 400,000 Nvidia Blackwell chips, but only two have been completed so far.
Oracle had pushed to get OpenAI into six more buildings, but both sides passed because the additional power supply wouldn't be available for at least a year. Instead of expanding the current Blackwell generation, OpenAI plans to buy Nvidia's next-generation Vera Rubin chips for a different data center. According to Bloomberg, Nvidia is now trying to get Meta to fill the vacant space, though those talks are still in the early stages.
OpenAI's compute manager Sachin Katti described the Stargate site as already one of the largest AI data center campuses in the country. "We considered expanding it further, but ultimately chose to put that additional capacity in other locations," Katti writes, adding that OpenAI is currently developing more than half a dozen sites across several US states.
Anthropic's coding tool Claude Code now supports local, scheduled tasks through a new /loop command. Users can set up recurring jobs at fixed intervals (minutes, hours, or days) that run in the background as long as Claude Code is active and auto-delete after three days. The feature uses standard cron expressions and the local time zone. One-time natural-language reminders like "remind me at 3 PM to push the release branch" are also supported, with up to 50 scheduled tasks per session.
Anthropic developer Thariq Shihipar gives the example of checking error logs every few hours, with Claude Code automatically creating pull requests for fixable bugs. The feature gets especially interesting when connected to other data sources, he says. Claude Code creator Boris Cherny adds use cases like auto-monitoring pull requests with self-fixing or generating morning Slack summaries. A detailed guide is available here.
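Claude Code's internal scheduler isn't documented here, but the standard five-field cron format it relies on is easy to illustrate. The minimal matcher below is a sketch, not Claude Code's implementation: it handles only `*`, `*/n`, and comma lists (no ranges), and it simplifies the day-of-week field (Python's `weekday()` uses 0=Monday, while cron uses 0=Sunday). It shows how an expression like an every-two-hours error-log check maps to concrete local times.

```python
# Sketch of how a five-field cron expression
# (minute hour day-of-month month day-of-week) maps to run times.
# Illustrates standard cron syntax, not Claude Code's internals.
from datetime import datetime

def field_matches(field: str, value: int) -> bool:
    """Check one cron field ('*', '*/n', or a comma list of numbers)."""
    if field == "*":
        return True
    if field.startswith("*/"):
        return value % int(field[2:]) == 0
    return value in {int(part) for part in field.split(",")}

def cron_matches(expr: str, dt: datetime) -> bool:
    """True if dt falls on a tick of the cron expression."""
    minute, hour, dom, month, dow = expr.split()
    return (field_matches(minute, dt.minute)
            and field_matches(hour, dt.hour)
            and field_matches(dom, dt.day)
            and field_matches(month, dt.month)
            # Simplified: real cron numbers days with 0=Sunday.
            and field_matches(dow, dt.weekday()))

# "At minute 0, every 2 hours" — the shape of a recurring log check:
print(cron_matches("0 */2 * * *", datetime(2026, 2, 10, 14, 0)))   # True
print(cron_matches("0 */2 * * *", datetime(2026, 2, 10, 14, 30)))  # False
```

The hypothetical expression `0 */2 * * *` fires on even hours; a real scheduler would additionally resolve the next matching tick rather than test individual timestamps.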
Google, Amazon, and Microsoft are assuring customers that Anthropic's AI technology will remain available outside of defense projects, CNBC reports, citing company spokespeople. None of the three has issued a public statement.
Microsoft said its legal team reviewed the classification and concluded that Anthropic products, including Claude, can stay available through M365, GitHub, and Microsoft's AI Foundry, except for the US Department of War. Google continues offering Claude through Vertex AI, Amazon through Bedrock and GovCloud.
According to the Washington Post, Claude was still used in the recent US attack on Iran despite the designation. The transition to other AI models is expected to take six months, with OpenAI among the new partners.