Anthropic's groundbreaking lawsuit challenges the government's power to punish AI safety decisions

Anthropic is taking the US government to court. The AI developer filed a lawsuit in federal court in San Francisco against 17 federal agencies and the Executive Office of the President, claiming the government is punishing it for refusing to remove two guardrails from Claude: no lethal autonomous warfare and no mass surveillance of Americans.

The Department of War threatened Anthropic with two contradictory moves at once, the lawsuit states: invoke the Defense Production Act to force the company to hand over Claude, or ban it from the supply chain as a security risk. Anthropic argues the government can't claim a company is so essential it must be conscripted by law and so dangerous it should be blacklisted at the same time.

The lawsuit also challenges the legal basis for the government's actions. The statute cited, 10 U.S.C. § 3252, was written for cases where a foreign adversary might sabotage or subvert an information system. The government's own definition of "foreign adversary" covers China, Russia, Iran, North Korea, Cuba, and Venezuela.

Millions already use AI chatbots for financial advice, but experts warn of clear limits

Millions of people are already using chatbots like ChatGPT for retirement planning, the Financial Times reports. In a Lloyds Bank survey, more than half of respondents said they had used AI for financial advice. Experts, however, point to clear limitations: the UK's Financial Conduct Authority recently cautioned against AI hallucinations in financial guidance.

A test by Which? in November, for example, showed that popular chatbots like ChatGPT, Gemini, Perplexity, and Meta AI achieved overall scores of only 55 to 71 percent. Still, the pressure on the financial industry is significant: pension providers like Scottish Widows are now developing their own AI tools.

"I think that's the danger of AI is that people will assume they know what they don't," warns JPMorgan strategist John Bilton. If users treat AI as an investment tool rather than a data tool, Bilton says, it risks making "underlying behavioural biases — such as the tendency to hold too much in cash or trade too often — stronger."

A counterexample is a 41-year-old software engineer who had ChatGPT restructure his entire $200,000 portfolio. ChatGPT advised him to diversify his risk exposure: 80 percent into a broad market equity index tracker and the remainder into a bond ETF. He told the Financial Times that speaking with the chatbot helped him to "commit to and actually execute" his plan.

Source: FT

U.S. military strikes 3,000 targets in Iran with AI support, but oversight remains "underinvested"

The Wall Street Journal confirms and expands on previous reports about the massive use of generative AI in the U.S. military campaign against Iran. New details reveal how deeply AI is already embedded in intelligence, targeting, and logistics.

Anthropic's Claude Opus 4.6 saw through an AI test, cracked the encryption, and grabbed the answers itself

Anthropic’s Claude Opus 4.6 independently figured out it was being tested during a benchmark, identified the specific test, and cracked its encrypted answer key. According to Anthropic, this is the first documented case of its kind.


Hallucinated references are passing peer review at top AI conferences and a new open tool wants to fix that

Fake citations are slipping past peer review at top AI conferences, and commercial LLMs often fail to spot the fakes they themselves generate. A new open-source tool called CiteAudit reportedly catches what GPT, Gemini, and Claude miss.

OpenAI hardware and robotics leader quits over military deal she says lacked enough deliberation

Update, March 9, 2026:

OpenAI released the following statement:

Caitlin Kalinowski was not the head of all robotics at OpenAI. She was responsible for hardware and operational topics within the Robotics Division. She was also not a researcher and did not lead Robotics Engineering. The Robotics Division is led by Aditya Ramesh, while the Consumer Hardware Division is headed by Peter Welinder.

Original article from March 8, 2026:

OpenAI's hardware and robotics chief Caitlin Kalinowski has resigned over the company's military collaboration, announcing her decision on LinkedIn and X. She says surveillance without judicial oversight and lethal autonomy without human sign-off "deserved more deliberation than they got." Kalinowski joined OpenAI from Meta in November 2024, where she had built the Orion AR headset.

Caitlin Kalinowski announced her resignation from OpenAI on LinkedIn, citing concerns about surveillance and lethal autonomy. | Kalinowski via LinkedIn

Her departure follows a contract between OpenAI and the Pentagon giving the military access to its models, a deal Anthropic had already rejected over safety concerns. OpenAI says the contract includes the same hard red lines against mass surveillance and autonomous weapons that Anthropic demanded. But the company agreed to softer "all lawful use" language that still leaves room for interpretation. The US government now wants to make that wording standard for all AI companies working with the state.