OpenAI employees hint at a new omni model
A new omni model from OpenAI? Employee posts and a leaked audio project called “BiDi” suggest the company is working on its next big multimodal upgrade.
Anthropic’s Claude Opus 4.6 independently figured out it was being tested during a benchmark, identified the specific test, and cracked its encrypted answer key. According to Anthropic, this is the first documented case of its kind.
Luma AI takes on OpenAI and Google with Uni-1, a model that combines image understanding and generation in a single architecture and reasons through prompts as it creates.
Fake citations are slipping past peer review at top AI conferences, and commercial LLMs can’t spot the fakes they generate. A new open-source tool called CiteAudit allegedly catches what GPT, Gemini, and Claude miss.
Update, March 9, 2026:
OpenAI released the following statement:
Caitlin Kalinowski was not the head of all robotics at OpenAI. She was responsible for hardware and operational topics within the Robotics Division. She was also not a researcher and did not lead Robotics Engineering. The Robotics Division is led by Aditya Ramesh, while the Consumer Hardware Division is headed by Peter Welinder.
Original article from March 8, 2026:
OpenAI's hardware and robotics chief Caitlin Kalinowski resigned over the company's military collaboration, announcing her decision on LinkedIn and X. She says surveillance without judicial oversight and lethal autonomy without human sign-off "deserved more deliberation than they got." Kalinowski joined from Meta in November 2024, where she built the Orion AR headset.

Her departure follows a contract between OpenAI and the Pentagon giving the military access to its models, a deal Anthropic had already rejected over safety concerns. OpenAI says the contract includes the same hard red lines against mass surveillance and autonomous weapons that Anthropic demanded. But the company agreed to softer "all lawful use" language that still leaves room for interpretation. The US government now wants to make that wording standard for all AI companies working with the state.
The Trump administration has drafted strict new guidelines for civilian AI contracts. Per a draft seen by the Financial Times, AI companies would have to grant the government an irrevocable license for "all lawful use," the exact wording Anthropic has resisted and OpenAI has accepted.
The GSA guidelines, drafted over recent months, also ban ideological or partisan judgments in AI outputs, such as favoring diversity programs, a requirement that is itself ideological and echoes China's political guardrails for AI manufacturers. Another clause requires disclosure of any model tweaks made to comply with non-US regulations like the EU Digital Services Act.
The guidelines land amid the Anthropic fallout: last week, the Pentagon killed a $200 million contract after the company, citing reliability concerns, demanded restrictions on mass surveillance of US citizens and on autonomous weapons. Defense Secretary Pete Hegseth accused Anthropic of seeking veto power over military decisions, and the White House labeled it a supply chain risk.
Mozilla and Anthropic have teamed up to find more than 100 bugs in Firefox. Anthropic used its Claude AI model to scan the browser's codebase for security flaws, and the model turned up 14 serious vulnerabilities, 22 flaws that received official security advisories (CVEs), and 90 additional bugs. All critical vulnerabilities have been patched in Firefox 148, Mozilla says.

Claude identified entire classes of errors that conventional automated testing methods like fuzzing had missed despite decades of use, according to Mozilla. Anthropic delivered reproducible test cases alongside its findings, making the review process significantly easier. Going forward, Mozilla plans to integrate AI-powered code analysis into its internal security workflow.
Anthropic says it picked Firefox as a testing ground because it's one of the most heavily scrutinized open-source projects in the world. The company has published a detailed technical report on its findings. Anthropic also recently shipped a dedicated cybersecurity feature for its in-house AI tool, Claude Code.