Mozilla’s latest experiment in Firefox Labs introduces a feature called Link Preview. With the feature enabled, holding Shift and Alt while hovering over a link brings up a preview card showing the page title, a short description, an image, an estimated reading time, and three automatically generated bullet points summarizing the content. Instead of sending browsing data to the cloud, Firefox generates the previews locally with SmolLM2-360M, a small language model from Hugging Face. Mozilla plans to improve language support, boost the speed and quality of the previews, and is considering bringing the feature to Android. Link Preview is optional and can be enabled through Firefox Labs.
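
For a rough sense of what on-device summarization like this involves, here is a minimal Python sketch using Hugging Face’s transformers library with the instruct variant of SmolLM2-360M. Firefox runs the model inside the browser rather than through Python, so the prompt and generation settings below are assumptions for illustration, not Mozilla’s actual implementation.

```python
# Minimal sketch: bullet-point link previews with a small local model.
# Assumptions: the instruct variant of SmolLM2-360M and this prompt format
# are stand-ins; Firefox's in-browser inference stack works differently.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "HuggingFaceTB/SmolLM2-360M-Instruct"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

def preview_bullets(page_text: str) -> str:
    """Ask the model for three short bullet points summarizing a page."""
    messages = [{
        "role": "user",
        "content": "Summarize the following page in exactly three short "
                   "bullet points:\n\n" + page_text[:4000],  # truncate long pages
    }]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    )
    output_ids = model.generate(input_ids, max_new_tokens=120, do_sample=False)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(
        output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True
    )

print(preview_bullets("Firefox Labs hosts Mozilla's experimental browser features ..."))
```

Everything here runs locally: the weights are downloaded once, and no page content leaves the machine, which is the same privacy argument Mozilla makes for the feature.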

xAI has rolled out three new features for its Grok voice assistant: Grok Vision, multilingual audio output, and real-time search in voice mode. According to the company, all three features are now available to iOS users. Android users with a SuperGrok subscription also get access to multilingual audio and real-time search. Grok Vision allows the assistant to provide live commentary on whatever appears on the smartphone screen. Google and OpenAI have been offering similar features for some time, using language models to interpret on-screen content in real time. The update is part of a broader push by xAI—Elon Musk’s artificial intelligence start-up—to compete with companies like Google and OpenAI. xAI recently introduced a new reasoning model called Grok 3 mini.

Articles from The Washington Post will now appear in ChatGPT responses under a new content licensing agreement between the newspaper and OpenAI. The integration covers politics, world affairs, business, and technology, with direct source citations provided in answers. "We’re all in on meeting our audiences where they are," said Peter Elkins-Williams, director of global partnerships at The Washington Post. The partnership follows a broader trend of exclusive licensing deals between media outlets and AI companies. Here's my usual caveat: such arrangements can reduce media diversity, posing risks to democratic discourse and the open web. Journalism scholar Jeff Jarvis has called these payments to publishers "pure lobbying."

AI researcher Sebastian Raschka has published a new analysis of how reinforcement learning is used to improve reasoning in large language models (LLMs). In a blog post, he describes how reinforcement learning algorithms are combined with training methods such as Reinforcement Learning from Human Feedback (RLHF) and Reinforcement Learning from Verifiable Rewards (RLVR). Raschka focuses on DeepSeek-R1, a model trained using verifiable rewards instead of human labels, to explain in detail how reinforcement learning can improve problem-solving performance.

"While reasoning alone isn’t a silver bullet, it reliably improves model accuracy and problem-solving capabilities on challenging tasks (so far)," Raschka writes. "And I expect reasoning-focused post-training to become standard practice in future LLM pipelines."
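
To make the RLVR idea concrete, here is a minimal sketch of a verifiable reward function. The "Answer: ..." extraction format is an assumption for illustration; the point is that the reward comes from an automatic correctness check rather than from a learned human-preference model as in RLHF.

```python
import re

def verifiable_reward(model_output: str, ground_truth: str) -> float:
    """Score a completion 1.0 if its final answer matches the known-correct
    answer, else 0.0 -- no human labels or reward model involved."""
    match = re.search(r"Answer:\s*(.+)", model_output)
    if match is None:
        return 0.0  # unparseable output earns no reward
    return 1.0 if match.group(1).strip() == ground_truth.strip() else 0.0

# Each sampled completion is scored automatically during RL training.
completion = "We compute 12 * 7 = 84.\nAnswer: 84"
print(verifiable_reward(completion, "84"))  # 1.0
```

Rewards like this only apply to tasks whose answers can be checked automatically, such as math and code, which is exactly the setting in which DeepSeek-R1 was trained with verifiable rewards.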
