Google is borrowing Anthropic's memory import approach, letting Gemini users bring over saved reminders, preferences, and full chat histories from apps like ChatGPT and Claude. The process works by copying a suggested prompt into the previous AI app, generating a summary, and pasting it into Gemini, which saves the information in its own context. Users can also upload chat histories as a ZIP file (up to 5 GB) and continue previous conversations inside Gemini. Google is renaming "Past Chats" to "Memory," with the rollout happening gradually.
Google's new memory import feature in Gemini: users copy a prompt into their previous AI app, then paste the generated summary into Gemini. | Image: Google
Anthropic pioneered this approach after OpenAI drew criticism for a military deal Anthropic had turned down on ethical grounds. With users already looking to switch, Anthropic wanted to give them an extra reason to make the move. Both Google and Anthropic rely on the same basic method for data extraction—a simple prompt that asks the existing AI app to output everything it has stored about the user.
Canadian AI company Cohere has released "Transcribe," a new open-source model for automatic speech recognition. The company claims the top spot on the Hugging Face Open ASR Leaderboard with an average word error rate of just 5.42 percent, beating out competitors like OpenAI's Whisper Large v3, ElevenLabs Scribe v2, and Qwen3-ASR-1.7B. Cohere says Transcribe also delivers the best throughput among similarly sized models.
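For context on the 5.42 percent figure: word error rate (WER) is the word-level edit distance between a reference transcript and the model's output, divided by the number of reference words. A minimal sketch (not Cohere's or the leaderboard's actual evaluation code, which typically also normalizes text first):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count,
    computed via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / len(ref)

# One substitution ("on" -> "in") across six reference words: WER = 1/6
print(word_error_rate("the cat sat on the mat", "the cat sat in the mat"))
```

A 5.42 percent average WER means roughly one word-level error per 18 reference words, averaged across the leaderboard's test sets.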
Cohere Transcribe compared with seven other speech recognition models. Models closer to the upper left corner perform best, meaning faster throughput and lower word error rates. | Image: Cohere

The 2 billion parameter model supports 14 languages, including English, German, French, and Japanese. It's available for download under the Apache 2.0 license on Hugging Face and can also be accessed through Cohere's API and the Model Vault platform. Cohere plans to integrate Transcribe into its AI agent platform North in the future.
Anthropic has secured a preliminary injunction against the Trump administration in a federal court in San Francisco. Judge Rita Lin temporarily blocked President Trump's order banning federal agencies from using Anthropic's AI models, along with the Pentagon's classification of the company as a security risk.
Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation. [...] Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government.
Anthropic leak reveals new model "Claude Mythos" with "dramatically higher scores on tests" than any previous model
Update: The leaked draft blog posts have surfaced online, revealing Anthropic’s plans for a new model class above its existing Opus line. The documents show two possible name candidates, details about a deliberately slow release strategy, and a strong focus on cybersecurity.
Meta's own supervisory body warns that Community Notes are no match for AI disinformation
Meta’s Oversight Board has examined the planned global expansion of Community Notes. Its conclusion: the system is too slow, too thinly staffed, and vulnerable to manipulation, especially given the growing flood of AI-generated disinformation. In certain countries, Meta should not introduce the program at all.
OpenAI is adding plugins to Codex that integrate with popular work tools like Slack, Figma, Notion, Gmail, and Google Drive. The plugins go beyond coding: OpenAI says they also help with planning, research, and coordination. Under the hood, plugins bundle predefined prompt workflows ("skills"), app integrations, and MCP server configurations into installable packages, similar to ChatGPT integrations. They work across the Codex app, command line, and IDE extensions. Developers can build their own and distribute them through local or team-wide "marketplaces." An official curated directory is already live, with self-publishing coming soon.
Apple has secured broad access rights to Google's Gemini models. According to The Information, Apple has full access to Gemini within its own data centers and can use distillation to build smaller models from it. Gemini generates high-quality answers along with its chain of thought, which then serve as training data for a smaller model. In short, Apple is paying for what Chinese AI companies are allegedly doing in secret: tapping a powerful AI model to generate quality training data for a smaller one.
Because Apple has full access, it can build smaller versions that give the same answers as Gemini and arrive at them the same way. These lighter versions need far less processing power and can run directly on Apple devices.
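The distillation pipeline described above can be sketched as a data-generation loop: the teacher model produces an answer plus its reasoning trace, and each pair becomes a training record for the student. The code below is a minimal illustration with a stub in place of the teacher; the function names and record format are hypothetical, not Apple's or Google's actual tooling:

```python
import json

def teacher_generate(prompt: str) -> dict:
    # Stand-in for a call to the teacher model (e.g. Gemini).
    # A real pipeline would query the model API here and get back
    # both the final answer and the chain-of-thought trace.
    return {
        "answer": f"[answer for: {prompt}]",
        "chain_of_thought": f"[reasoning trace for: {prompt}]",
    }

def build_distillation_record(prompt: str) -> str:
    out = teacher_generate(prompt)
    # The student is trained to reproduce the reasoning trace as well
    # as the answer, so it learns *how* the teacher gets there, not
    # just the final output.
    record = {
        "input": prompt,
        "target": out["chain_of_thought"] + "\n" + out["answer"],
    }
    return json.dumps(record)

prompts = ["Summarize this email thread.", "Set a reminder for 9 am."]
dataset = [build_distillation_record(p) for p in prompts]
```

Training a much smaller model on millions of such records is what lets the student approximate the teacher's behavior at a fraction of the compute cost, small enough to run on-device.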
Since Gemini is built for chatbots and enterprise applications, it doesn't always line up with Apple's plans for Siri, according to The Information. But Apple is still building its own models in parallel through its Apple Foundation Models team. New AI features could drop at Apple's developer conference in June.