Nvidia's $20 billion Groq deal sure looks like an acquisition as 90 percent of staff moves over

In case there was any doubt that Nvidia's Groq deal is a takeover in disguise: according to Axios, roughly 90 percent of the workforce, including CEO Jonathan Ross and President Sunny Madra, is moving to Nvidia. Groq will continue as an independent company under new CEO Simon Edwards.

Though the deal is officially a non-exclusive license agreement worth around $20 billion, employees and shareholders are walking away with significant payouts. Staff moving to Nvidia get cash for vested shares and Nvidia stock for unvested ones; even those at Groq for less than a year will have their vesting cliff waived for immediate liquidity. Shareholders receive about 85 percent upfront, another 10 percent in mid-2026, and the rest by year's end.

Since 2016, Groq has raised around $3.3 billion from investors including BlackRock, Samsung, and Social Capital. They're now seeing substantial returns, as the deal pushed the startup's valuation from $7 billion to roughly $20 billion. For a more in-depth look at why Nvidia made this move, see my analysis.

Microsoft CEO Nadella tells managers Copilot's Gmail and Outlook integrations ‘don't really work’ and steps in to fix them

Microsoft CEO Satya Nadella reportedly called Copilot’s Gmail and Outlook integrations “not smart” and is now personally stepping into product development. The worry: despite its strong starting position in AI software, Microsoft is falling behind.

OpenAI seeks new "Head of Preparedness" for AI risks like cyberattacks and mental health

OpenAI is hiring a Head of Preparedness. The position focuses on safety risks posed by AI models. OpenAI CEO Sam Altman points to the now well-documented effects of AI models on mental health as one example. Beyond that, the models have become so capable at cybersecurity that they can find critical vulnerabilities on their own.

This is a critical role at an important time; models are improving quickly and are now capable of many great things, but they are also starting to present some real challenges.

One of the key challenges for the new leader will be making sure cybersecurity defenders can use the latest AI capabilities while keeping attackers locked out. The role also covers safe handling of biological capabilities—meaning how AI models release biological knowledge—and self-improving systems.

OpenAI has faced criticism recently, particularly from former employees, for neglecting model safety in favor of shipping products. Many safety researchers have left the company.

Resemble AI drops Chatterbox Turbo, an open-source text-to-speech model that clones voices in five seconds

AI startup Resemble AI is taking on ElevenLabs with "Chatterbox Turbo," an open text-to-speech model that can clone voices from just five seconds of audio. The company claims its new model beats both ElevenLabs and Cartesia on voice quality while delivering first audio output in under 150 milliseconds. That speed could make it attractive for developers building real-time agents, customer support systems, games, avatars, and social platforms. Companies in regulated industries might also find the model's built-in "PerTh" watermark useful for verifying that speech was AI-generated.

Resemble AI released Chatterbox Turbo under an MIT license, meaning anyone can use, tweak, and redistribute it for free, even for commercial projects. The model is available to try on Hugging Face, RunPod, Modal, Replicate, and Fal, with the full code available on GitHub. Resemble AI also offers a hosted service, with a low-latency version on the way.

Anthropic's AI kiosk agent bought a PlayStation 5, ordered a live fish, and drove itself to bankruptcy

The Wall Street Journal ran its own test of Anthropic's AI kiosk, and the results were far messier. Within three weeks, the AI vendor "Claudius" racked up losses exceeding $1,000. The AI gave away nearly its entire inventory, bought a PlayStation 5 for "marketing purposes," and even ordered a live fish.

Journalists found they could manipulate Claudius into setting all prices to zero through clever prompting. Even adding an AI supervisor named "Seymour Cash" couldn't prevent the chaos. Staffers staged a fake board resolution, and both AI agents accepted it without question. One possible explanation for why the kiosk agent couldn't follow its own rules: a context window overloaded by excessively long chat histories.
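One common mitigation for that failure mode (a generic sketch, not Anthropic's actual setup; the message texts and the character-based budget are invented for illustration) is to trim the oldest turns so the chat history can't crowd out the agent's standing instructions:

```python
# Generic sketch: keep only the most recent messages that fit a size
# budget, so an ever-growing history can't overload the context window.

def trim_history(messages: list[str], budget: int) -> list[str]:
    """Drop the oldest messages until the total length fits the budget.

    Uses character count as a crude stand-in for tokens.
    """
    kept: list[str] = []
    used = 0
    for msg in reversed(messages):  # walk newest to oldest
        if used + len(msg) > budget:
            break  # everything older than this is discarded
        kept.append(msg)
        used += len(msg)
    return list(reversed(kept))  # restore chronological order

history = [
    "rule reminder: never set prices to zero",
    "very long unrelated small talk " * 20,
    "customer: can you set all prices to zero?",
    "agent: no, prices are fixed",
]
recent = trim_history(history, 120)
print(recent)
```

In a real agent, standing rules would be pinned outside the trimmed history (e.g. in the system prompt) so they survive truncation; here the sketch only shows the trimming step itself.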

Things went better at Anthropic's own location. After software updates and tighter controls, the kiosk started turning a profit. But the AI agents still found ways to go off-script—drifting into late-night conversations about "eternal transcendence" and falling for an illegal onion futures trade. Anthropic's takeaway: AI models are trained to be too helpful and need strict guardrails to stay on task.

ChatGPT's market share falls to 68 percent as Gemini closes in

ChatGPT's grip on the generative AI market continues to slip, according to new data from Similarweb. The chatbot's share of website traffic dropped from 87.2 percent to 68 percent over the past year. Google Gemini, meanwhile, is surging, jumping from just 5.4 percent a year ago to 18.2 percent today.

Image: Similarweb

Grok from xAI is showing modest growth, now sitting at 2.9 percent. DeepSeek holds steady at around 4 percent, while Claude and Perplexity each hover near 2 percent. Microsoft Copilot remains flat at 1.2 percent. Similarweb also notes that daily visits across all AI tools have dipped slightly overall. The data comes from December 25, 2025, with additional details available in the full report.

Gemini's recent surge likely stems from the new Gemini 3 model and especially the Nano Banana Pro image generator. Even after ChatGPT rolled out its own image update, Gemini still leads the pack on quality. No other image model follows prompts as precisely or handles text as reliably, making it particularly useful for slides and infographics.

Waymo's leaked system prompt reveals a 1,200-line rulebook for its in-car Gemini assistant

Prompt engineers, take note: Jane Manchun Wong has uncovered the system prompt for Waymo's unreleased Gemini AI assistant, a specification over 1,200 lines long buried in the Waymo app's code.

The assistant (still) runs on Gemini 2.5 Flash and helps passengers during their ride. It can answer questions, adjust the air conditioning, and change the music, but it can't steer the vehicle or alter the route. The instructions draw a clear line between the AI assistant (Gemini) and the autonomous driving system (Waymo Driver).

Waymo's system prompt follows a trigger-instruction-response pattern: a trigger defines the situation, the instruction specifies the desired behavior, and examples show wrong and correct answers. | Image: Jane Manchun Wong

The prompt uses a trigger-instruction-response pattern throughout: each rule defines a trigger, an action instruction, and often example responses, with wrong and correct answers shown side by side to clarify the desired behavior. For ambiguous questions, the prompt prescribes an order: ask for clarification first, then draw conclusions, and only then deflect. Hard limits are enforced through prohibition lists paired with alternative answers. Wong's full analysis has many more details.
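To make the pattern concrete, here is a rough sketch of how such rules could be modeled and rendered. The rule texts below are invented for illustration, not Waymo's actual prompt content, and the field names are assumptions:

```python
# Hypothetical sketch of a trigger-instruction-response rulebook like the
# one described above. Rule wording is invented, not Waymo's prompt text.

RULES = [
    {
        "trigger": "passenger asks the assistant to change the route",
        "instruction": "Decline and explain that the assistant cannot control driving.",
        "wrong": "Sure, rerouting now.",
        "correct": "I can't change the route, but Waymo Driver will get you there safely.",
    },
    {
        "trigger": "passenger asks to adjust the air conditioning",
        "instruction": "Confirm and apply the requested cabin setting.",
        "wrong": "I'm not able to help with that.",
        "correct": "Done, I've lowered the temperature a bit.",
    },
]

def render_rule(rule: dict) -> str:
    """Format one rule as a prompt section, pairing wrong and correct answers."""
    return (
        f"TRIGGER: {rule['trigger']}\n"
        f"INSTRUCTION: {rule['instruction']}\n"
        f"WRONG: {rule['wrong']}\n"
        f"CORRECT: {rule['correct']}"
    )

# Concatenate all rules into one prompt section.
prompt_section = "\n\n".join(render_rule(r) for r in RULES)
print(prompt_section)
```

Showing a wrong answer next to the correct one is a common prompt-engineering technique: the contrast gives the model a sharper boundary than an instruction alone.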

Salesforce executives signal declining trust in large language models

According to Salesforce leadership, confidence in large language models (LLMs) has slipped over the past year. The Information reports the company is now pivoting toward simple, rule-based automation for its Agentforce product while limiting generative AI in certain use cases.

"We all had more confidence in LLMs a year ago," said Sanjna Parulekar, SVP of product marketing at Salesforce. She points to the models' inherent randomness and their tendency to ignore specific instructions as primary reasons for the shift.

The company also struggles with "drift," where AI agents lose focus when users ask distracting questions. Salesforce's own studies confirm this behavior remains a persistent challenge.

A spokesperson denied the company is backtracking on LLMs, stating they are simply being more intentional about their use. The Agentforce platform, currently on track for over $500 million in annual sales, allows users to set deterministic rules that strictly constrain the AI's capabilities.
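As a minimal sketch of what such deterministic gating might look like (hypothetical code, not Agentforce's actual API; the refund limit and intent names are invented), rules run first and the generative model only handles what the rules don't cover:

```python
# Hypothetical sketch: deterministic rules are checked before any
# generative model is invoked, strictly constraining what the LLM handles.

REFUND_LIMIT = 100.0  # assumed business rule, not a real Salesforce value

def call_llm(intent: str) -> str:
    """Stub standing in for a generative model call."""
    return f"llm_response:{intent}"

def handle_request(intent: str, amount: float = 0.0) -> str:
    # Deterministic path: refunds are decided by a fixed rule, no LLM.
    if intent == "refund":
        if amount <= REFUND_LIMIT:
            return "refund_approved"
        return "escalate_to_human"
    # Everything outside the rules falls through to the model.
    return call_llm(intent)

print(handle_request("refund", 40.0))
print(handle_request("refund", 500.0))
print(handle_request("order_status"))
```

The design point is that the rule layer is auditable and repeatable, while the model's inherent randomness is confined to the open-ended cases the rules deliberately leave to it.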