Ad
Short

Yann LeCun accuses Anthropic of regulatory capture. The dispute centers on an AI-driven cyberattack that Anthropic says happened with almost no human oversight and posed a serious cybersecurity threat. After the company published its findings, US Senator Chris Murphy called for tougher AI regulation.

Chris Murphy and Yann LeCun reacted publicly after Anthropic warned about a large-scale AI-driven cyberattack. | Image: X

LeCun, who reportedly is preparing to leave Meta, pushed back on the political reaction and accused companies like Anthropic of using questionable studies to stoke fear and push for stricter rules that would disadvantage open models. In his view, the goal is to shut out open-source competitors.

Trump's AI advisor, David Sacks, has also accused Anthropic of using what he called a "sophisticated regulatory capture strategy based on fear-mongering."

Short

Anthropic has released a method to check how evenly its chatbot Claude responds to political issues. The company says Claude should not make political claims without proof and should avoid being viewed as conservative or liberal. Claude’s behavior is shaped by system prompts and by training that rewards what the firm calls neutral answers. These answers can include lines about respecting “the importance of traditional values and institutions,” which shows this is about moving Claude into line with current political demands in the US.

Gemini 2.5 Pro is rated most neutral at 97 percent, ahead of Claude Opus 4.1 (95%), Sonnet 4.5 (94%), GPT‑5, Grok 4, and Llama 4. | via Anthropic

Anthropic does not say this in its blog, but the move toward such tests is likely tied to a rule from the Trump administration that chatbots must not be “woke.” OpenAI is steering GPT‑5 in the same direction to meet US government demands. Anthropic has made its test method available as open source on GitHub.

Ad
Ad
Short

Linkedin is introducing a generative AI search feature for Premium users in the US, aiming to make it easier to find the right people. Instead of relying on exact keywords, users can now enter natural language prompts like "someone who has built a small business" or "a digital marketing professional." Previously, the search required details like a company name or job title to get relevant results.

The new search tool uses Linkedin's own data to deliver more flexible and relevant matches. Linkedin says it plans to expand the feature to other countries soon. The company has also started using user data for AI training by default, but anyone who wants to opt out can do so in their account settings.

Ad
Ad
Short

Firefox is testing a new feature called "AI Window," giving users a dedicated space in the browser to interact with an AI assistant when they want it. Mozilla frames the tool as a controlled chat panel that can support browsing tasks without reshaping how people use the browser. The company says users decide if, when, and how the feature appears, and they can disable it at any time.

Image: Mozilla

Mozilla positions this approach as a contrast to browsers like OpenAI's Atlas, where AI is either always present or completely absent, or, in Mozilla's view, nudges users into open-ended chat sessions. With AI Window, Firefox aims to maintain its identity as a fast, private, and independent browser, treating AI as an optional layer built around transparency and user control. Those interested can join Mozilla's waiting list. All AI browsers released so far have already faced significant security issues.

Short

OpenAI is testing a group chat feature for ChatGPT in Japan, South Korea, Taiwan, and New Zealand. Users on Free, Go, Plus, and Pro plans can chat together with other people and ChatGPT in the same conversation. The system won't pull in personal memories from private chats. ChatGPT jumps in based on context or when someone addresses it directly.

Image: OpenAI

The responses run on the GPT-5.1-Auto model. Participants can join through invitation links, manage groups, and customize ChatGPT's settings individually. Users under 18 get automatic content restrictions, and parents can disable the feature entirely.

Ad
Ad
Short

According to the Wall Street Journal, Amazon, Microsoft, and AI startup Anthropic are backing a US law that would further restrict Nvidia's chip exports to China. The proposed Gain AI Act would require semiconductor companies to satisfy US demand first before shipping chips to countries under arms embargos. The law would give tech giants like Amazon and Microsoft priority access to chips.

Nvidia opposes the plan, warning it would create unnecessary market interference. Some government officials question whether the law is even needed, pointing out that the Commerce Department already has the authority to enforce export controls. Meta and Google haven't commented on the proposal. The Gain AI Act could be attached to the defense budget as an amendment.

Google News