Anthropic's new prompt forces ChatGPT to reveal everything it knows about you
Anthropic is capitalizing on OpenAI’s bad press with a new import function for Claude. A single prompt exports your saved context from ChatGPT or other chatbots, letting you transfer it straight to Claude’s memory.
Artificial Analysis has released version 2.0 of its AA-WER speech-to-text benchmark. ElevenLabs' Scribe v2 leads with a word error rate of just 2.3 percent, followed by Google's Gemini 3 Pro (2.9%) and Mistral's Voxtral Small (3.0%). Google's Gemini 3 Flash (3.1%) and ElevenLabs' older Scribe v1 (3.2%) are close behind. Notably, Google didn't specifically train for transcription—the strong results come from Gemini's general multimodal capabilities. OpenAI's popular open-source Whisper Large v3 (4.2%) lands mid-pack, while Alibaba's Qwen3 ASR Flash (5.9%), Amazon's Nova 2 Omni (6.0%), and Rev AI (6.1%) bring up the rear.
ElevenLabs' Scribe v2 tops the AA-WER v2.0 overall ranking with the lowest word error rate, followed by Google's Gemini 3 Pro and Mistral's Voxtral Small. | Image: Artificial Analysis
The results hold up in the separate AA-AgentTalk test for speech directed at voice assistants: Scribe v2 (1.6%) and Gemini 3 Pro (1.7%) pull well ahead, with AssemblyAI's Universal-3 Pro taking third at 2.3%.
ElevenLabs' Scribe v2 and Google's Gemini 3 Pro also dominate the AA-AgentTalk voice assistant test with the lowest error rates. | Image: Artificial Analysis
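For context, word error rate is the standard transcription metric: the minimum number of word-level substitutions, insertions, and deletions needed to turn the model's output into the reference transcript, divided by the number of reference words. A minimal sketch of the calculation (not Artificial Analysis's exact pipeline, which applies its own text normalization before scoring):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution ("the" -> "a") out of six reference words: 1/6 ≈ 0.167
print(word_error_rate("the cat sat on the mat", "the cat sat on a mat"))
```

A 2.3 percent score like Scribe v2's therefore means roughly one word-level error per 43 words of reference text.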
The latest generation of large language models—from GPT-5 onward—still struggles when tasks are spread across multiple conversation turns. Researcher Philippe Laban and his team tested current models on six tasks covering code, databases, actions, data-to-text, math, and summarization. Performance drops significantly when information is split across several messages (sharded) instead of a single prompt (concat).
Newer models did slightly better—performance degradation shrank from 39 to 33 percent—but the issue is far from solved. The biggest gains showed up in Python tasks, where some models only lost 10 to 20 percent. Laban suspects real-world losses could be even worse, since the tests used simple user simulations. Users who change their mind mid-conversation would likely cause steeper drops.
Technical tweaks like lowering temperature values don't fix the problem, the original study found. The researchers recommend starting a fresh conversation when things go sideways, ideally by having the model summarize all requests first and using that summary as the starting point for a new chat.
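The researchers' recovery pattern boils down to collecting everything the user has asked for so far and restating it up front in a fresh conversation, turning a "sharded" exchange back into the "concat" setting the models handle better. A minimal sketch of the prompt-building step (the function name and wording are illustrative, not from the study):

```python
def build_restart_prompt(sharded_requests: list[str]) -> str:
    """Combine the requests from a derailed multi-turn chat into one
    consolidated prompt, suitable as the opening message of a new
    conversation (the 'concat' setting from the study)."""
    bullet_list = "\n".join(f"- {req.strip()}" for req in sharded_requests)
    return (
        "Here is the complete task, stated up front instead of "
        "piece by piece:\n"
        f"{bullet_list}\n"
        "Please address all points in a single answer."
    )

# Example: three requirements originally spread across three turns
shards = [
    "Write a function that parses a CSV file",
    "It should skip the header row",
    "Return the rows as dictionaries",
]
print(build_restart_prompt(shards))
```

In practice, the study suggests going one step further: have the model itself summarize the conversation's requirements, then use that summary (rather than a manual list) as the first message of the new chat.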
OpenAI calls Stuart Russell a "doomer" in court after its CEO co-signed his AI extinction warning
Fear generates attention, and OpenAI usually knows how to use that. But in court, the company is trying to discredit an AI expert as a doomsday prophet, even though CEO Sam Altman spent years spreading the same warnings when they still served his own agenda.
In a letter to AI Minister Evan Solomon, OpenAI has promised the Canadian government it will tighten its safety protocols. The move follows a fatal shooting at a school in Tumbler Ridge, British Columbia, that killed eight people. The suspect, Jesse Van Rootselaar, had previously interacted with ChatGPT. An internal algorithm flagged the interactions as possible warnings of real-world violence, and OpenAI employees reviewed them. The company blocked the account but ultimately decided not to contact police.
According to the Wall Street Journal, OpenAI now plans to adopt more flexible criteria for sharing account data with authorities, establish direct lines of communication with Canadian law enforcement, and improve its systems for detecting evasion tactics. OpenAI Vice President Ann O'Leary said the account would have been reported under the new rules. Canada's Justice Minister Sean Fraser warned that new AI regulations could follow if OpenAI doesn't act quickly.
Anthropic says it will take the US government to court after Secretary of Defense Pete Hegseth moved to classify the AI company as a supply chain risk, a designation previously reserved for foreign adversaries. Anthropic calls the classification illegal and says it will "challenge any supply chain risk designation in court."
We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government.
Anthropic
Hegseth also implied military suppliers should no longer be allowed to do business with Anthropic. But according to Anthropic, there's no legal basis for that move: the classification under 10 USC 3252 only applies to the use of Claude in direct contracts with the Department of Defense. For private customers, commercial contracts, and access through the API or claude.ai, nothing would change.
OpenAI signs Pentagon deal for classified AI networks hours after Anthropic gets banned from federal agencies
OpenAI struck a deal with the Pentagon just hours after Anthropic was barred from government contracts. OpenAI claims to operate under the same safety principles as Anthropic, but the language both companies have used so far suggests differences.
Anthropic's dispute with the Pentagon is now rippling through Google and OpenAI. According to the New York Times, more than 100 Google AI employees sent a letter to chief scientist Jeff Dean, who had previously voiced support for Anthropic's position, demanding that Google draw the same red lines for Gemini: no surveillance of American citizens and no autonomous weapons without human oversight. Separately, nearly 50 OpenAI and 175 Google employees published an open letter criticizing the Pentagon's negotiating tactics.
We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.
According to the Wall Street Journal, OpenAI CEO Sam Altman told his employees that OpenAI is working on its own Pentagon contract that would include the same safety guidelines Anthropic is pushing for. Altman hopes to find a solution that works for other AI companies as well.