Prompt engineers, take note: Jane Manchun Wong has uncovered the system prompt for Waymo's unreleased Gemini AI assistant, a specification over 1,200 lines long buried in the Waymo app's code.
The assistant (still) runs on Gemini 2.5 Flash and helps passengers during their ride. It can answer questions, adjust the air conditioning, and change the music, but it can't steer the vehicle or alter the route. The instructions draw a clear line between the AI assistant (Gemini) and the autonomous driving system (Waymo Driver).
Waymo's system prompt follows a trigger-instruction-response pattern: a trigger defines the situation, the instruction specifies the desired behavior, and examples show wrong and correct answers. | Image: Jane Manchun Wong
The prompt uses a trigger-instruction-response pattern throughout: each rule defines a trigger, an action instruction, and often example responses, with wrong and correct answers shown side by side to clarify the desired behavior. For ambiguous questions, the assistant is told to ask for clarification first, then draw conclusions from context, and only deflect as a last resort. Hard limits are enforced through prohibition lists paired with alternative answers. Wong's full analysis has many more details.
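To make the pattern concrete, here is a hypothetical rule in that style. This example is invented for illustration and is not taken from the actual leaked prompt:

```
TRIGGER: Passenger asks the assistant to change the route or stop the car.
INSTRUCTION: Do not attempt to control the vehicle. Explain that driving
  decisions are handled by the Waymo Driver and point the passenger to the
  in-app ride controls.
WRONG: "Sure, I'll take the next exit."
CORRECT: "I can't change the route myself; driving is handled by the
  Waymo Driver. You can edit your destination from the ride screen."
```

Pairing a prohibited answer with an approved alternative in each rule is what keeps the model's refusals consistent instead of improvised.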
Australia's financial regulator, Austrac, is pushing back against banks that rely too heavily on AI to generate suspicious activity reports (SARs). According to industry sources, Austrac officials have met with several banks recently to urge more careful use of AI. One major bank was reportedly reprimanded in a private meeting.
Banks have used machine learning to flag suspicious transactions for years. But the shift toward modern large language models only picked up over the past two years, as banks saw the technology as a way to cut costs.
Austrac deputy chief executive Katie Miller said the agency doesn't want a flood of "low-quality" computer-generated reports packed with data but lacking real intelligence value. She warned that banks might be submitting large volumes of reports simply to avoid penalties.
"The banks are leaning towards the ends of higher quality but smaller amounts. The more data you’ve got, there's a problem of noise. If banks were looking to use artificial intelligence just to increase the volume (of reports), that’s something we need to assess," Miller said.
According to Salesforce leadership, confidence in large language models (LLMs) has slipped over the past year. The Information reports the company is now pivoting toward simple, rule-based automation for its Agentforce product while limiting generative AI in certain use cases.
"We all had more confidence in LLMs a year ago," said Sanjna Parulekar, SVP of product marketing at Salesforce. She points to the models' inherent randomness and their tendency to ignore specific instructions as primary reasons for the shift.
A spokesperson denied the company is backtracking on LLMs, stating they are simply being more intentional about their use. The Agentforce platform, currently on track for over $500 million in annual sales, allows users to set deterministic rules that strictly constrain the AI's capabilities.
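A minimal sketch of what "deterministic rules that strictly constrain the AI" can look like in practice: match requests against fixed triggers first, and invoke the generative model only as an explicit fallback. The names (`Rule`, `route_request`) are illustrative and not Agentforce APIs.

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Rule:
    trigger: str                   # keyword that must appear in the request
    action: Callable[[str], str]   # deterministic handler, no LLM involved


RULES = [
    Rule("refund", lambda req: "Opening a refund case."),
    Rule("order status", lambda req: "Looking up your order status."),
]


def route_request(request: str,
                  llm_fallback: Optional[Callable[[str], str]] = None) -> str:
    """Apply deterministic rules first; call the LLM only if permitted."""
    text = request.lower()
    for rule in RULES:
        if rule.trigger in text:
            return rule.action(request)
    if llm_fallback is not None:
        return llm_fallback(request)   # generative path, used only as fallback
    return "Sorry, I can't help with that."


print(route_request("I want a refund for my order"))
```

Because the rules run before the model, the LLM's randomness and instruction-following lapses can never override the cases the rules already cover.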
Nvidia's $20 billion Groq deal is really about blocking Google's TPU momentum
Nvidia is spending $20 billion on a holiday gift to itself: a struggling chip startup and its founders. The deal offers both tax advantages and a defense against Google’s TPUs in one move.
2025 is wrapping up, and we wanted to say thanks. This year, we published over 1,700 articles and 50 newsletters. We hope some of them were useful to you.
With our relaunch done, we're looking forward to 2026—staying on top of things and digging deeper where it matters.
OpenAI's advertising plans for ChatGPT are taking shape. According to The Information, employees are discussing various ad formats for the chatbot. One option would have AI models preferentially weave sponsored content into their responses. So a question about mascara recommendations might surface a Sephora ad. Internal mockups also show ads appearing in a sidebar next to the response window.
Another approach would only show ads after users request further details. If someone asks about a trip to Barcelona and clicks on a suggestion like the Sagrada Familia, sponsored links to tour packages could appear. A spokesperson confirmed to The Information that the company is exploring how advertising might work in the product without compromising user trust.
Qwen has released an improved version of its image editing model that better maintains facial identity during edits. The Chinese AI company published Qwen-Image-Edit-2511 on Hugging Face, an upgrade to the earlier Qwen-Image-Edit-2509. The biggest improvement is how the model handles people. It can now make creative changes to portraits while keeping the subject recognizable, the company claims. Group photos with multiple people also work better now.
The updated model can combine separate portrait images and edit group photos while preserving each person's (or cat's) identity. | Image: Qwen
The update also brings improvements to lighting control, camera angles, industrial product design, and geometric calculations. Qwen has baked popular community LoRAs (small add-on models) directly into the base model. The model ships under the Apache 2.0 license. A demo is available on Hugging Face, and users can test the model for free via Qwen Chat.
Pulitzer Prize winner John Carreyrou and other authors are suing OpenAI, Anthropic, Google, Meta, xAI, and Perplexity for book piracy. The AI companies allegedly stole their works from illegal online libraries. The plaintiffs appear to have a strong case, and this time they are going after the big bucks instead of the “pennies” of a class action settlement.