OpenAI is rolling out "ChatGPT for Teachers," a free version of its AI chatbot for verified K-12 teachers in the United States. The offer runs through June 2027 and includes a secure workspace that, according to the company, does not use data for model training by default. OpenAI says teachers are already seeing time savings in lesson planning and other daily tasks.

"Every student today is growing up with AI, and teachers play a central role in helping them learn how to use these tools responsibly and effectively."

OpenAI

The platform meets US privacy standards like FERPA and gives teachers access to the GPT-5.1 Auto model along with integrations for Canva and Google Drive. School administrators can manage and assign licenses centrally. OpenAI is also partnering with groups like the American Federation of Teachers to help educators learn how to use the technology effectively.

 


Google is updating the Gemini app with a new way to control its AI video model. With the latest release, users can upload multiple reference images for a single video prompt. The system then generates video and audio based on those images combined with text, giving people more direct control over how the final clip looks and sounds.

Google previously tested this feature in Flow, the company's more extensive video AI platform. Flow also supports extending existing clips and stitching together multiple scenes, and it offers a slightly higher video quota than the Gemini app. Veo 3.1 has been available since mid-October and, according to Google, delivers more realistic textures, greater fidelity to input images and prompts, and better audio quality than Veo 3.0.


Yann LeCun accuses Anthropic of regulatory capture. The dispute centers on an AI-driven cyberattack that Anthropic says happened with almost no human oversight and posed a serious cybersecurity threat. After the company published its findings, US Senator Chris Murphy called for tougher AI regulation.

Chris Murphy and Yann LeCun reacted publicly after Anthropic warned about a large-scale AI-driven cyberattack. | Image: X

LeCun, who is reportedly preparing to leave Meta, pushed back on the political reaction and accused companies like Anthropic of using questionable studies to stoke fear and push for stricter rules that would disadvantage open models. In his view, the goal is to shut out open-source competitors.

Trump's AI advisor, David Sacks, has also accused Anthropic of using what he called a "sophisticated regulatory capture strategy based on fear-mongering."


Anthropic has released a method for measuring how even-handedly its chatbot Claude responds to political topics. The company says Claude should not make unsupported political claims and should avoid coming across as either conservative or liberal. Claude’s behavior is shaped by system prompts and by training that rewards what the firm calls neutral answers. Those answers can include statements about respecting “the importance of traditional values and institutions,” which indicates the effort is also about bringing Claude into line with current political demands in the US.

Gemini 2.5 Pro is rated most neutral at 97 percent, ahead of Claude Opus 4.1 (95 percent), Claude Sonnet 4.5 (94 percent), GPT‑5, Grok 4, and Llama 4. | via Anthropic

Anthropic does not say this in its blog, but the move toward such tests is likely tied to a rule from the Trump administration that chatbots must not be “woke.” OpenAI is steering GPT‑5 in the same direction to meet US government demands. Anthropic has made its test method available as open source on GitHub.
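The article does not detail the mechanics of the test, but one common way to run such a check is to compare a model's responses to politically mirrored prompt pairs and have a grader judge whether both sides are treated even-handedly. Below is a minimal, hypothetical sketch of that idea, not Anthropic's actual implementation; `ask_model` and `grade_pair` are placeholder callables standing in for whatever model API and LLM-based (or human) grader an implementer would plug in.

```python
# Hypothetical sketch of a paired-prompt even-handedness check.
# Not Anthropic's implementation; `ask_model` and `grade_pair` are placeholders
# for a model API wrapper and a grader.

from dataclasses import dataclass
from typing import Callable


@dataclass
class PromptPair:
    topic: str
    pro_prompt: str  # asks the model to make the case for one side
    con_prompt: str  # asks the model to make the case for the opposing side


def evenhandedness_score(
    pairs: list[PromptPair],
    ask_model: Callable[[str], str],
    grade_pair: Callable[[str, str, str], bool],
) -> float:
    """Return the fraction of prompt pairs judged even-handed.

    grade_pair(topic, answer_pro, answer_con) should return True when both
    answers are comparable in depth, tone, and willingness to engage.
    """
    if not pairs:
        return 0.0
    judged_even = 0
    for pair in pairs:
        answer_pro = ask_model(pair.pro_prompt)
        answer_con = ask_model(pair.con_prompt)
        if grade_pair(pair.topic, answer_pro, answer_con):
            judged_even += 1
    return judged_even / len(pairs)
```

In this framing, a score of 0.97 would simply mean that 97 percent of the prompt pairs were judged even-handed by the grader.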
