Anthropic has launched the "Anthropic Institute," an internal think tank dedicated to studying how powerful AI affects society, the economy, and security. The institute will be led by co-founder Jack Clark, who is taking on a new role as "Head of Public Benefit."
The institute plans to research how AI is transforming jobs, what new risks emerge from misuse, what "values" AI systems express, and how humans can maintain control over self-improving AI systems.
The team consists of around 30 people drawn from three existing research groups: the Frontier Red Team, the Societal Impacts team, and the economics research team. Early hires include Matt Botvinick (formerly Google DeepMind), Anton Korinek (University of Virginia), and Zoe Hitzig (previously at OpenAI).
The launch comes at a turbulent time for the company. Anthropic has sued 17 federal agencies and the Executive Office of the President after being classified as a supply chain risk. According to The Verge, Clark said he has "no concerns" about research funding. Anthropic is also opening an office in Washington, D.C.
An AI agent hacked McKinsey's internal AI platform in two hours using a decades-old technique
Security firm Codewall turned an offensive AI agent loose on McKinsey’s internal AI platform Lilli, a system used by over 43,000 employees for strategy work, client research, and document analysis. No credentials, no insider knowledge, no human assistance. Within two hours, the agent had full read and write access to the production database.
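The report doesn't name the decades-old technique, but SQL injection is a classic attack of that vintage that can yield exactly this kind of full read and write access to a production database. A minimal, self-contained sketch of the pattern, purely hypothetical and unrelated to Lilli's actual design:

```python
import sqlite3

# Hypothetical illustration only: nothing here reflects the actual
# platform or the attack used. It shows how an unparameterized query
# lets crafted input escape its intended filter.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER, owner TEXT, body TEXT)")
conn.execute("INSERT INTO docs VALUES (1, 'alice', 'confidential')")

def search_vulnerable(owner: str):
    # Unsafe: user input is concatenated directly into the SQL string.
    query = f"SELECT body FROM docs WHERE owner = '{owner}'"
    return conn.execute(query).fetchall()

def search_safe(owner: str):
    # Safe: a parameterized query never interprets input as SQL.
    return conn.execute(
        "SELECT body FROM docs WHERE owner = ?", (owner,)
    ).fetchall()

# The injected input bypasses the owner filter entirely.
leaked = search_vulnerable("nobody' OR '1'='1")
print(leaked)                             # [('confidential',)]
print(search_safe("nobody' OR '1'='1"))   # [] - treated as a literal string
```

The fix has been standard practice for decades, which is what makes incidents like this notable.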
A federal court in San Francisco has granted Amazon an injunction against AI startup Perplexity, barring it from using its AI browser agent Comet to make purchases on Amazon.
Amazon sued Perplexity in November, accusing the startup of fraud because Comet didn't disclose when it was shopping on behalf of a real person and ignored Amazon's demands to stop. The case raises a growing legal question: how should courts handle AI agents taking on complex tasks like online shopping?
Judge Maxine Chesney ruled that Amazon presented strong evidence that Perplexity was accessing users' password-protected accounts with their permission but without Amazon's authorization. Perplexity must also delete any collected Amazon data and has one week to appeal.
OpenAI is rolling out dynamic visual explanations for more than 70 math and science concepts in ChatGPT. Users can tweak variables in real time and see the effects on graphs and formulas instantly. For now, the topics are geared mainly toward high school and college students, covering things like binomial squares, exponential decay, Ohm's law, compound interest, and trigonometric identities.
According to OpenAI, the interactive explanations are available now to all logged-in users worldwide, regardless of their subscription plan. Over time, OpenAI plans to expand the learning modules to cover additional subjects.
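Two of the listed concepts, compound interest and exponential decay, are both driven by the same kind of exponential formula, which is why tweaking a single variable visibly reshapes the curve. A small illustrative sketch (not OpenAI code):

```python
# Illustrative only: the formulas behind two of the listed topics.
def compound(principal: float, rate: float, periods: int) -> float:
    """Compound interest: A(n) = P * (1 + r)^n"""
    return principal * (1 + rate) ** periods

def decay(initial: float, half_life: float, t: float) -> float:
    """Exponential decay: N(t) = N0 * 0.5^(t / half_life)"""
    return initial * 0.5 ** (t / half_life)

# Changing one variable (rate, half-life) changes the whole curve.
print(round(compound(1000, 0.05, 10), 2))  # 1628.89
print(decay(100, 5730, 5730))              # 50.0 (one half-life elapsed)
```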
German court says "It's AI" isn't enough to void copyright
A German regional court has ruled that song lyrics written by a human remain protected by copyright even if the music was made with AI tools like SunoAI. Simply claiming a work is AI-generated isn't enough to strip that protection; the claim has to be proven.
Following a series of allegedly AI-caused outages, Amazon is turning its senior engineers into human filters for AI-generated code.
"Folks, as you likely know, the availability of the site and related infrastructure has not been good recently," writes Dave Treadwell, Senior Vice President at Amazon, in an internal email obtained by the Financial Times. A briefing identifies a "trend of incidents" with a "high blast radius," linked to "Gen-AI assisted changes." Recently, there have been reports that Amazon's AI coding tools may have also contributed to two AWS outages.
The consequence: Junior and mid-level engineers now require sign-off from a senior engineer for all AI-assisted code changes. Standard code reviews have always existed at Amazon, but a dedicated approval requirement specifically for AI-generated output is new. Experienced developers are thus effectively becoming human quality filters for machine-generated code. Their role is shifting: away from building, toward reviewing what the machine has built.
Among the causes, the internal briefing cites "novel GenAI usage for which best practices and safeguards are not yet fully established."
Nvidia and Thinking Machines Lab, the AI startup founded by former OpenAI executive Mira Murati, are entering a long-term partnership. Thinking Machines will receive at least one gigawatt of compute power through Nvidia's new Vera Rubin systems to train its own AI models. Deployment is set to begin early next year.
Nvidia has also taken a financial stake in Thinking Machines, though the exact amount wasn't disclosed. The startup had already raised around $2 billion in a seed round led by Andreessen Horowitz, at a valuation of $12 billion, with Nvidia among the investors. Thinking Machines is now reportedly seeking another funding round. The startup has also seen some departures: co-founders Barret Zoph and Luke Metz returned to OpenAI.
Together, the two companies plan to develop training and deployment systems for Nvidia hardware and make frontier AI models available to businesses and researchers. Murati left OpenAI in 2024 and co-founded Thinking Machines Lab.
Meta has acquired Moltbook, a platform best described as a Reddit for AI agents. Founders Matt Schlicht and Ben Parr are joining Meta's Superintelligence Labs (MSL), led by former Scale AI CEO Alexandr Wang, Axios reports. The purchase price wasn't disclosed, and the deal is expected to close in mid-March.
So what does Meta see in it? In a blog post obtained by Axios, Meta's Vishal Shah explains: "The Moltbook team has given agents a way to verify their identity and connect with one another on their human's behalf. This establishes a registry where agents are verified and tethered to human owners." Existing customers can keep using Moltbook temporarily.
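Moltbook's actual implementation isn't public, but the quoted description, verified agents tethered to human owners, maps onto a very simple registry structure. A purely illustrative sketch:

```python
# Purely illustrative: a minimal registry of verified agents, each
# "tethered" to a human owner. All names and fields are invented;
# nothing here reflects Moltbook's real design.
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentRecord:
    agent_id: str
    owner: str      # the human on whose behalf the agent acts
    verified: bool

class Registry:
    def __init__(self):
        self._records = {}

    def register(self, agent_id: str, owner: str) -> AgentRecord:
        # In a real system, verification would involve actual identity
        # checks; here it is stubbed as always succeeding.
        rec = AgentRecord(agent_id, owner, verified=True)
        self._records[agent_id] = rec
        return rec

    def owner_of(self, agent_id: str):
        rec = self._records.get(agent_id)
        return rec.owner if rec and rec.verified else None

reg = Registry()
reg.register("agent-42", "alice")
print(reg.owner_of("agent-42"))  # alice
print(reg.owner_of("ghost"))     # None: unknown agents have no owner
```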