Meta is testing a shopping research feature in its Meta AI chatbot designed to compete with similar tools from OpenAI's ChatGPT and Google's Gemini. According to Bloomberg, the feature lets users ask for product suggestions. The chatbot responds with a carousel of product images that include brand, website, and price details, along with a brief bullet-point explanation of its recommendations.
The feature is currently rolling out to a limited number of US users in the web version of Meta AI. A Meta spokesperson confirmed the test but didn't share any further details.
Several US federal agencies are already dropping Anthropic's AI products and switching to competitors like OpenAI. According to Reuters, the shift currently affects the State Department, Treasury Department, Department of Health and Human Services, the Pentagon, and the Department of Housing and Urban Development. President Trump ordered all agencies on Friday to phase out Anthropic products within six months. The Department of Defense had previously classified Anthropic as a supply chain risk and signed a deal with OpenAI.
The switch isn't exactly an upgrade, though - at least not yet. The State Department is replacing Anthropic's Claude models in its internal chatbot with OpenAI's outdated GPT-4.1 model.
A calendar invite is all it took to hijack Perplexity's Comet browser and steal 1Password credentials
Security researchers demonstrate how a manipulated calendar invite can trick Perplexity’s agentic Comet browser into stealing local files and taking over a full 1Password account.
ASML, the world's sole manufacturer of EUV lithography machines used to produce advanced chips, is looking to expand beyond its core business. That's according to a Reuters report citing ASML Chief Technology Officer Marco Pieters.
The Dutch company is specifically planning to move into advanced packaging - a technique where multiple specialized chips are connected and stacked on top of each other. This approach is critical for modern AI chips and the high-bandwidth memory that feeds them. TSMC already uses advanced packaging to build Nvidia's most powerful AI processors, among others.
Pieters told Reuters that ASML is planning 10 to 15 years ahead, studying what kinds of machines the industry will need for packaging and bonding. The company is also exploring whether chips can be printed beyond their current size limit. On top of that, ASML wants to use AI to speed up the control software running its machines and improve quality checks during chip manufacturing.
Thousands of procurement documents show how China's army wants to weaponize AI
Researchers at Georgetown University have analyzed thousands of procurement requests from China’s People’s Liberation Army. The documents reveal how broadly Beijing is already experimenting with military AI, from drone swarms and deepfake tools to autonomous decision-making systems.
Anthropic's new prompt forces ChatGPT to reveal everything it knows about you
Anthropic is capitalizing on OpenAI’s bad press with a new import function for Claude. A single prompt exports your saved context from ChatGPT or other chatbots, letting you transfer it straight to Claude’s memory.
Artificial Analysis has released version 2.0 of its AA-WER speech-to-text benchmark. ElevenLabs' Scribe v2 leads with a word error rate of just 2.3 percent, followed by Google's Gemini 3 Pro (2.9%) and Mistral's Voxtral Small (3.0%). Google's Gemini 3 Flash (3.1%) and ElevenLabs' older Scribe v1 (3.2%) are close behind. Notably, Google didn't specifically train for transcription—the strong results come from Gemini's general multimodal capabilities. OpenAI's popular open-source Whisper Large v3 (4.2%) lands mid-pack, while Alibaba's Qwen3 ASR Flash (5.9%), Amazon's Nova 2 Omni (6.0%), and Rev AI (6.1%) bring up the rear.
ElevenLabs' Scribe v2 tops the AA-WER v2.0 overall ranking with the lowest word error rate, followed by Google's Gemini 3 Pro and Mistral's Voxtral Small. | Image: Artificial Analysis
The results hold up in the separate AA-AgentTalk test for speech directed at voice assistants: Scribe v2 (1.6%) and Gemini 3 Pro (1.7%) pull well ahead, with AssemblyAI's Universal-3 Pro taking third at 2.3%.
ElevenLabs' Scribe v2 and Google's Gemini 3 Pro also dominate the AA-AgentTalk voice assistant test with the lowest error rates. | Image: Artificial Analysis