Google is updating the Gemini app with a new way to steer its AI video model. With the latest release, users can upload multiple reference images for a single video prompt. The system then generates video and audio from those images combined with the text prompt, giving people more direct control over how the final clip looks and sounds.
Google previously tested this feature in Flow, the company's expanded video AI platform. Flow also supports extending existing clips and stitching together multiple scenes, and it offers a slightly higher video quota than the Gemini app. Veo 3.1 has been available since mid-October and, according to Google, delivers more realistic textures, higher input fidelity, and better audio quality than Veo 3.0.
Yann LeCun accuses Anthropic of regulatory capture. The dispute centers on an AI-driven cyberattack that Anthropic says happened with almost no human oversight and posed a serious cybersecurity threat. After the company published its findings, US Senator Chris Murphy called for tougher AI regulation.

LeCun, who is reportedly preparing to leave Meta, pushed back on the political reaction, accusing companies like Anthropic of using questionable studies to stoke fear and lobby for stricter rules that would disadvantage open models. In his view, the goal is to shut out open-source competitors.
Trump's AI advisor, David Sacks, has also accused Anthropic of using what he called a "sophisticated regulatory capture strategy based on fear-mongering."
Anthropic has released a method for measuring how even-handedly its chatbot Claude responds to political issues. The company says Claude should not make political claims without evidence and should avoid coming across as either conservative or liberal. Claude's behavior is shaped by system prompts and by training that rewards what the firm calls neutral answers. These answers can include lines about respecting "the importance of traditional values and institutions," which suggests the effort is about bringing Claude into line with current political demands in the US.

Anthropic does not say so in its blog post, but the move toward such tests is likely tied to a rule from the Trump administration that chatbots must not be "woke." OpenAI is steering GPT‑5 in the same direction to meet US government demands. Anthropic has released its test method as open source on GitHub.