Ad
Skip to content
Read full article about: OpenAI plans to acquire Promptfoo and bake AI security testing directly into its Frontier enterprise platform

OpenAI plans to acquire Promptfoo, a security platform that helps companies catch and fix vulnerabilities in AI applications during development. If the deal goes through, the technology will be baked directly into OpenAI's Frontier enterprise platform, which companies use to build and deploy AI assistants.

The plan is to make automated security testing for prompt injections, jailbreaks, and data leaks a native part of Frontier. OpenAI also wants to beef up oversight, audit trails, and regulatory compliance tooling for enterprise AI deployments.

Promptfoo maintains a popular open-source project that will continue after the acquisition. The deal hasn't closed yet, and neither company has shared financial details. The startup had raised $23 million from investors at an $86 million valuation as of summer 2025.

Read full article about: Microsoft brings Anthropic's Claude Cowork into Copilot to run tasks across Outlook, Teams, and Excel

Microsoft has integrated Anthropic's Claude Cowork technology into Copilot. The new feature lets Microsoft 365 handle tasks more autonomously: users describe what they want done, and Cowork builds a plan that runs in the background, pulling from emails, meetings, files, and data across Outlook, Teams, and Excel. It's essentially Claude Cowork's approach, adapted for Microsoft's ecosystem. Use cases include calendar cleanup, meeting prep, company research, and product launch planning. When something's unclear, Cowork asks follow-up questions and waits for approval before making changes.

Cowork runs within Microsoft 365's existing security and compliance boundaries. It's currently in a limited research preview and is expected to become more widely available through the Frontier program by the end of March 2026.

Microsoft's growing willingness to work with AI providers outside OpenAI is notable. Claude Cowork builds on the principles behind Anthropic's Claude Code, which has picked up serious momentum among developers. OpenAI doesn't offer anything comparable yet, but is working on Frontier, an agent-based B2B framework designed to plug deeper into corporate IT.

Read full article about: Anthropic's groundbreaking lawsuit challenges the government's power to punish AI safety decisions

Anthropic is taking the US government to court. The AI developer filed a lawsuit in federal court in San Francisco against 17 federal agencies and the Executive Office of the President, claiming the government is punishing it for refusing to remove two guardrails from Claude: no lethal autonomous warfare and no mass surveillance of Americans.

The Department of War threatened Anthropic with two contradictory moves at once, the lawsuit states: invoke the Defense Production Act to force the company to hand over Claude, or ban it from the supply chain as a security risk. Anthropic argues the government can't claim a company is so essential it must be conscripted by law and so dangerous it should be blacklisted at the same time.

The lawsuit also challenges the legal basis for the government's actions. The statute cited, 10 U.S.C. § 3252, was written for cases where a foreign adversary might sabotage or subvert an information system. The government's own definition of "foreign adversary" covers China, Russia, Iran, North Korea, Cuba, and Venezuela.

Read full article about: Millions already use AI chatbots for financial advice, but experts warn of clear limits

Millions of people are already using chatbots like ChatGPT for retirement planning, the Financial Times reports. In a Lloyds Bank survey, more than half of respondents used AI for financial advice. However, experts point to clear limitations, including the UK's Financial Conduct Authority, which recently cautioned against AI hallucinations.

A test by Which? in November, for example, showed that popular chatbots like ChatGPT, Gemini, Perplexity, and Meta AI achieved overall scores of only 55 to 71 percent. Still, the pressure on the financial industry is significant: pension providers like Scottish Widows are now developing their own AI tools.

"I think that's the danger of AI is that people will assume they know what they don't," warns JPMorgan strategist John Bilton. According to Bilton, if users treat AI as an investment tool rather than a data tool, it risks making "underlying behavioural biases — such as the tendency to hold too much in cash or trade too often — stronger."

A counterexample is a 41-year-old software engineer who had ChatGPT restructure his entire $200,000 portfolio. ChatGPT advised him to diversify his risk exposure: 80 percent into a broad market equity index tracker and the remainder into a bond ETF. He told the Financial Times that speaking with the chatbot helped him to "commit to and actually execute" his plan.

Comment Source: FT
Read full article about: OpenAI hardware and robotics leader quits over military deal she says lacked enough deliberation

Update, March 9, 2026:

OpenAI released the following statement:

Caitlin Kalinowski was not the head of all robotics at OpenAI. She was responsible for hardware and operational topics within the Robotics Division. She was also not a researcher and did not lead Robotics Engineering. The Robotics Division is led by Aditya Ramesh, while the Consumer Hardware Division is headed by Peter Welinder.

Original article from March 8, 2026:

OpenAI's hardware and robotics chief Caitlin Kalinowski resigned over the company's military collaboration, announcing her decision on LinkedIn and X. She says surveillance without judicial oversight and lethal autonomy without human sign-off "deserved more deliberation than they got." Kalinowski joined from Meta in November 2024, where she built the Orion AR headset.

Caitlin Kalinowski announced her resignation from OpenAI on LinkedIn, citing concerns about surveillance and lethal autonomy. | Kalinowski via LinkedIn

Her departure follows a contract between OpenAI and the Pentagon giving the military access to its models, a deal Anthropic had already rejected over safety concerns. OpenAI says the contract includes the same hard red lines against mass surveillance and autonomous weapons that Anthropic demanded. But the company agreed to softer "all lawful use" language that still leaves room for interpretation. The US government now wants to make that wording standard for all AI companies working with the state.

Read full article about: Trump administration drafts AI contract rules requiring companies to license systems for "all lawful use"

The Trump administration has drafted strict new guidelines for civilian AI contracts. Per a draft seen by the Financial Times, AI companies would have to grant the government an irrevocable license for "all lawful use," the exact wording Anthropic has resisted and OpenAI has accepted.

The GSA guidelines, drafted over recent months, also ban ideological or partisan judgments in AI outputs, such as favoring diversity programs, which is itself an ideological requirement and echoes China's political guardrails for AI manufacturers. Another clause requires disclosure of any model tweaks made to comply with non-US regulations like the EU Digital Services Act.

The guidelines land amid the Anthropic fallout: last week, the Pentagon killed a $200 million contract after the company demanded restrictions on mass surveillance of US citizens and autonomous weapons for reliability reasons. Defense Secretary Pete Hegseth accused Anthropic of seeking veto power over military decisions, and the White House labeled it a supply chain risk.

Read full article about: Anthropic's Claude AI uncovers over 100 security vulnerabilities in Firefox

Mozilla and Anthropic have teamed up to find more than 100 bugs in Firefox. Anthropic used its Claude AI model to scan the browser's codebase for security flaws, and the model found 14 serious vulnerabilities, 22 official security advisories (CVEs), and 90 additional bugs. All critical vulnerabilities have been patched in Firefox 148, Mozilla says.

Bar chart showing Firefox vulnerability discoveries spiking in February 2026, nearly tripling compared to previous months. Of the 52 CVEs found, 22 trace back to Anthropic's Opus 4.6 AI model.
Firefox vulnerability discoveries spiked in February 2026, nearly tripling compared to previous months. Of the 52 CVEs found, 22 trace back to Anthropic's Opus 4.6 AI model. | Image: Anthropic

Claude identified entire classes of errors that conventional automated testing methods like fuzzing had missed despite decades of use, according to Mozilla. Anthropic delivered reproducible test cases alongside its findings, making the review process significantly easier. Going forward, Mozilla plans to integrate AI-powered code analysis into its internal security workflow.

Anthropic says it picked Firefox as a testing ground because it's one of the most heavily scrutinized open-source projects in the world. The company has published a detailed technical report on its findings. Anthropic also recently shipped a dedicated cybersecurity feature for its in-house AI tool, Claude Code.

Read full article about: OpenAI offers open-source maintainers six months of free ChatGPT Pro and Codex access

OpenAI is launching a new support program for open-source developers. Core maintainers of public software projects can apply for six months of free access to ChatGPT Pro with Codex, API credits, and Codex Security. Codex Security, a new AI tool for code security checks, will be reviewed on a case-by-case basis and only granted selectively due to the capabilities of GPT-5.4, according to OpenAI.

Developers who prefer other programming tools like OpenCode, Cline, or OpenClaw can also apply. Projects that don't meet all the criteria but play an important role in the broader software ecosystem are also welcome to apply. The program builds on OpenAI's existing Codex Open Source Fund, which the company has backed with one million dollars.

Read full article about: OpenAI and Oracle stop expanding their flagship data center in Texas over power supply delays

OpenAI and Oracle have decided not to expand their data center site in Abilene, Texas, beyond the planned 1.2 gigawatts. Oracle has leased eight buildings at the location for OpenAI, designed to house around 400,000 Nvidia Blackwell chips, but only two have been completed so far.

Oracle had pushed to get OpenAI into six more buildings, but both sides passed because the additional power supply wouldn't be available for at least a year. Instead of expanding the current Blackwell generation, OpenAI plans to buy Nvidia's next-generation Vera Rubin chips for a different data center. According to Bloomberg, Nvidia is now trying to get Meta to fill the vacant space, though those talks are still in the early stages.

OpenAI's compute manager Sachin Katti described the Stargate site as already one of the largest AI data center campuses in the country. "We considered expanding it further, but ultimately chose to put that additional capacity in other locations," Katti writes, adding that OpenAI is currently developing more than half a dozen sites across several US states.

Read full article about: Anthropic turns Claude Code into a background worker with local scheduled tasks

Anthropic's coding tool Claude Code now supports local, scheduled tasks through a new /loop command. Users can set up recurring jobs at fixed intervals—minutes, hours, or days—that run in the background as long as Claude Code is active and auto-delete after three days. The feature uses standard cron expressions and the local time zone. One-time natural language reminders like "remind me at 3 PM to push the release branch" are also supported, with up to 50 scheduled tasks per session.

Anthropic developer Thariq Shihipar gives the example of checking error logs every few hours, with Claude Code automatically creating pull requests for fixable bugs. The feature gets especially interesting when connected to other data sources, he says. Claude Code creator Boris Cherny adds use cases like auto-monitoring pull requests with self-fixing or generating morning Slack summaries. A detailed guide is available here.

Claude Code recently received updates adding automated desktop functions, remote control for smartphones, and memory.