In case there was any doubt that Nvidia's Groq deal is a takeover in disguise: according to Axios, roughly 90 percent of the workforce, including CEO Jonathan Ross and President Sunny Madra, is moving to Nvidia. Groq will continue as an independent company under new CEO Simon Edwards.
Though the deal is officially a non-exclusive license agreement worth around $20 billion, employees and shareholders are walking away with significant payouts. Staff moving to Nvidia get cash for vested shares and Nvidia stock for unvested ones; even those at Groq for less than a year will have their vesting cliff waived for immediate liquidity. Shareholders receive about 85 percent upfront, another 10 percent in mid-2026, and the remaining 5 percent by year's end.
Microsoft CEO Nadella tells managers Copilot's Gmail and Outlook integrations ‘don't really work’ and steps in to fix them
Microsoft CEO Satya Nadella reportedly called Copilot’s Gmail and Outlook integrations “not smart” and is now personally stepping into product development. The worry: despite its strong starting position in AI software, Microsoft is falling behind.
This is a critical role at an important time; models are improving quickly and are now capable of many great things, but they are also starting to present some real challenges.
One of the key challenges for the new leader will be making sure cybersecurity defenders can use the latest AI capabilities while keeping attackers locked out. The role also covers the safe handling of biological capabilities, meaning how AI models handle the release of biological knowledge, as well as self-improving systems.
AI startup Resemble AI is taking on ElevenLabs with "Chatterbox Turbo," an open text-to-speech model that can clone voices from just five seconds of audio. The company claims its new model beats both ElevenLabs and Cartesia on voice quality while delivering first audio output in under 150 milliseconds. That speed could make it attractive for developers building real-time agents, customer support systems, games, avatars, and social platforms. Companies in regulated industries might also find the model's built-in "PerTh" watermark useful for verifying that speech was AI-generated.
Resemble AI released Chatterbox Turbo under an MIT license, meaning anyone can use, tweak, and redistribute it for free, even for commercial projects. The model is available to try on Hugging Face, RunPod, Modal, Replicate, and Fal, with the full code available on GitHub. Resemble AI also offers a hosted service, with a low-latency version on the way.
Less is more: Meta’s new image model, Pixio, beats more complex competitors at depth estimation and 3D reconstruction, despite having fewer parameters and relying on a training method widely considered outdated.
China proposes rules to combat AI companion addiction
China wants to crack down on emotionally manipulative AI chatbots. Under proposed rules, providers would have to detect addictive behavior and step in when users show psychological warning signs. California is taking similar steps after tragic stories linked to AI companions.
Meta brings Segment Anything to audio, letting editors pull sounds from video with a click or text prompt
Filtering a dog bark from street noise or isolating a sound source with a single click on a video: Meta’s SAM Audio brings the company’s visual segmentation approach to the audio world. The model lets users edit audio using text commands, clicks, or time markers. Code and weights are open source.
The Wall Street Journal ran its own test of Anthropic's AI kiosk, and the results were far messier. Within three weeks, the AI vendor "Claudius" racked up losses exceeding $1,000. The AI gave away nearly its entire inventory, bought a PlayStation 5 for "marketing purposes," and even ordered a live fish.
Journalists found they could manipulate Claudius into setting all prices to zero through clever prompting. Even adding an AI supervisor named "Seymour Cash" couldn't prevent the chaos. Staffers staged a fake board resolution, and both AI agents accepted it without question. One possible explanation for why the kiosk agent couldn't follow its own rules: a context window overloaded by excessively long chat histories.
Things went better at Anthropic's own location. After software updates and tighter controls, the kiosk started turning a profit. But the AI agents still found ways to go off-script—drifting into late-night conversations about "eternal transcendence" and falling for an illegal onion futures trade. Anthropic's takeaway: AI models are trained to be too helpful and need strict guardrails to stay on task.