OpenAI calls Stuart Russell a "doomer" in court after its CEO co-signed his AI extinction warning

Fear generates attention, and OpenAI usually knows how to use that. But in court, the company is trying to discredit an AI expert as a doomsday prophet, even though CEO Sam Altman spent years spreading the same warnings when they still served his own agenda.

OpenAI promises Canada tighter safety protocols after ChatGPT flagged a shooter's violent chats but never called police

In a letter to AI Minister Evan Solomon, OpenAI has promised the Canadian government it will tighten its safety protocols. The move follows a fatal shooting at a school in Tumbler Ridge, British Columbia, that killed eight people. The suspect, Jesse Van Rootselaar, had previously interacted with ChatGPT. An internal algorithm flagged the interactions as possible warnings of real-world violence, and OpenAI employees reviewed them. The company blocked the account but ultimately decided not to contact police.

According to the Wall Street Journal, OpenAI now plans to adopt more flexible criteria for sharing account data with authorities, establish direct lines of communication with Canadian law enforcement, and improve its systems for detecting evasion tactics. OpenAI Vice President Ann O'Leary said the account would have been reported under the new rules. Canada's Justice Minister Sean Fraser warned that new AI regulations could follow if OpenAI doesn't act quickly.

OpenAI signs Pentagon deal for classified AI networks hours after Anthropic gets banned from federal agencies

OpenAI struck a deal with the Pentagon just hours after Anthropic was barred from government contracts. OpenAI claims to operate under the same safety principles as Anthropic, but the language the two companies have used so far suggests their commitments differ.

Google Deepmind and OpenAI employees demand Anthropic-style red lines on Pentagon surveillance and autonomous weapons

Anthropic's dispute with the Pentagon is now rippling through Google and OpenAI. According to the New York Times, more than 100 Google AI employees sent a letter to chief scientist Jeff Dean—who had previously voiced support for Anthropic's position—demanding that Google draw the same red lines for Gemini: no surveillance of American citizens and no autonomous weapons without human oversight. Separately, nearly 50 OpenAI and 175 Google employees published an open letter criticizing the Pentagon's negotiating tactics.

We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.

Quote from the open letter "We will not be divided"

According to the Wall Street Journal, OpenAI CEO Sam Altman told his employees that OpenAI is working on its own Pentagon contract that would include the same safety guidelines Anthropic is pushing for. Altman hopes to find a solution that works for other AI companies as well.

Figma and OpenAI connect design and code through new Codex integration

A new integration links Figma's design platform directly with OpenAI's Codex. Teams can automatically generate editable Figma designs from code and convert designs into working code. It runs on the open MCP standard, supports Figma Design, Figma Make, and FigJam, and is set up in the Codex desktop app for macOS.

Until now, moving between Figma and code was mostly a one-way street. Dev Mode offered basic HTML/CSS snippets, plugins exported designs as React or HTML, and Figma Make generated React components from text input. These tools worked in isolation without understanding the full project. The new integration creates an end-to-end connection where the AI accesses code, Figma files, and the design system simultaneously.
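To make the "end-to-end connection" concrete, here is a minimal sketch of an MCP-style tool server: a registry of named tools dispatched via JSON-RPC 2.0 messages, which is the message shape the MCP standard builds on. The tool name (`figma/export_frame`) and its arguments are hypothetical illustrations, not Figma's actual MCP schema.

```python
import json

# Minimal MCP-style dispatch: tools are registered by name and invoked
# through JSON-RPC 2.0 requests. The tool and payload below are made up
# for illustration; they are not Figma's real MCP interface.

TOOLS = {}

def tool(name):
    """Register a function under a tool name."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("figma/export_frame")
def export_frame(frame_id: str, fmt: str = "react") -> dict:
    # Stand-in for real export logic: return a stub component.
    return {"frame_id": frame_id, "code": f"// {fmt} component for {frame_id}"}

def handle(request_json: str) -> str:
    """Dispatch one JSON-RPC request to the matching tool."""
    req = json.loads(request_json)
    fn = TOOLS[req["params"]["name"]]
    result = fn(**req["params"]["arguments"])
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

response = handle(json.dumps({
    "jsonrpc": "2.0", "id": 1,
    "params": {"name": "figma/export_frame",
               "arguments": {"frame_id": "42:7", "fmt": "react"}},
}))
print(response)
```

The point of the pattern is that a coding agent like Codex only needs to speak this one protocol to reach code, Figma files, and the design system through the same channel.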

Figma was one of the first partners with its own ChatGPT app and uses ChatGPT Enterprise internally. According to OpenAI, over one million people access Codex weekly, with usage up more than 400 percent since the start of the year.

OpenAI ships API upgrades targeting voice reliability and agent speed for developers

OpenAI has shipped two API updates for developers. The first is gpt-realtime-1.5, a new model for the Realtime API designed to make voice interactions more reliable. In internal testing, OpenAI saw roughly a ten percent improvement in transcribing numbers and letters, a five percent bump in logical audio tasks, and seven percent better instruction following. The audio model has also been updated to version 1.5.

The second update: the Responses API now supports WebSockets. Instead of retransmitting the full context with every request, a persistent connection sends only new data as it arrives. According to OpenAI, the change speeds up complex AI agents with many tool calls by 20 to 40 percent.
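A quick back-of-the-envelope sketch shows why this matters for agents with many tool calls: a stateless API re-sends the whole conversation each turn, so traffic grows quadratically with the number of turns, while a stateful connection only sends each new delta. The message sizes below are assumed round numbers for illustration, not OpenAI measurements.

```python
# Compare cumulative bytes sent over N turns under two transport models.
# Numbers are illustrative assumptions, not benchmarks.

def stateless_bytes(deltas):
    """Stateless HTTP: every request re-sends the entire context so far."""
    total, context = 0, 0
    for d in deltas:
        context += d      # context grows by the new message...
        total += context  # ...and the full context goes over the wire again
    return total

def persistent_bytes(deltas):
    """Persistent WebSocket with server-side state: send only the delta."""
    return sum(deltas)

turns = [2_000] * 50  # 50 turns of ~2 KB each (assumed sizes)
print(stateless_bytes(turns))   # 2,550,000 bytes re-transmitted
print(persistent_bytes(turns))  # 100,000 bytes
```

With 50 turns, the stateless model moves about 25 times more data; the gap widens as agent conversations get longer, which is consistent with the speedups OpenAI reports for tool-heavy agents.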

OpenAI wants to retire the AI coding benchmark that everyone has been competing on

OpenAI says the SWE-bench Verified programming benchmark has lost its value as a meaningful measure of AI coding ability. The company points to two main problems. First, at least 59.4 percent of the benchmark's tasks are flawed: their tests reject correct solutions because they enforce specific implementation details or check functions that the task description never mentions.
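A toy illustration of that failure mode (hypothetical, not an actual SWE-bench task): the task only asks that duplicates be removed, but the hidden test also pins an internal helper that the original patch happened to use, so a perfectly correct alternative solution fails.

```python
import sys

# The task description: "remove duplicates from a list, preserving order."
def dedupe(items):
    # A correct solution per the description.
    seen = set()
    return [x for x in items if not (x in seen or seen.add(x))]

def overly_strict_test(module):
    """Hypothetical benchmark test that over-specifies the solution."""
    behavior_ok = module.dedupe([1, 2, 2, 3]) == [1, 2, 3]   # asked for
    pins_internals = hasattr(module, "_dedupe_with_dict")    # never asked for
    return behavior_ok and pins_internals

# The correct solution still fails, because it lacks the pinned helper.
print(overly_strict_test(sys.modules[__name__]))  # False
```

This is the shape of flaw OpenAI describes: the model's code does what the task asks, yet the grader marks it wrong for diverging from one specific implementation.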

Many tasks and solutions have also leaked into leading models' training data. OpenAI reports that GPT-5.2, Claude Opus 4.5, and Gemini 3 Flash Preview could reproduce some original fixes from memory, meaning benchmark progress increasingly reflects what a model has seen, not how well it codes. OpenAI recommends SWE-bench Pro instead and is building its own non-public tests.

There's a possible strategic angle here: a "contaminated" benchmark can make rivals—especially open-source models—look better and skew rankings. SWE-bench Verified was long the gold standard for AI coding evaluation, with OpenAI, Anthropic, Google, and many Chinese open-weight models competing for small leads. AI benchmarks can provide useful signal, but their real-world value remains limited.

OpenAI partners with major consulting firms to push Frontier agent platform

OpenAI has launched a new partner program called "Frontier Alliances." The initiative aims to bring the company's recently introduced Frontier platform to large enterprise customers. Frontier lets businesses build AI agents that handle tasks independently, from processing customer inquiries and pulling CRM data to verifying policies. Details about the platform remain scarce at this point. For now, Frontier is only available to a select group of customers.

To get Frontier into major corporations, OpenAI has signed multi-year partnerships with Boston Consulting Group (BCG), McKinsey, Accenture, and Capgemini. BCG and McKinsey are taking on strategy, organizational restructuring, and rollout planning, while Accenture and Capgemini handle the technical side, integrating Frontier with existing systems and data infrastructure. All four partners are standing up dedicated teams that will be certified in OpenAI's technology.

OpenAI CEO Sam Altman warns "the world is not prepared" as OpenAI accelerates research using its own AI

Sam Altman says AGI is “pretty close” and superintelligence “not that far off.” Speaking at the Express Adda event in India, the OpenAI CEO suggested the company’s internal models are already accelerating its own research and that “the world is not prepared” for what’s coming.