US War Department CTO says Anthropic's AI models "pollute" the supply chain with built-in ethics

Emil Michael, the US Department of War's chief technology officer, made clear that classifying Anthropic as a supply chain risk is an ideologically motivated move. Claude models "pollute" the supply chain because they have a "different policy preference" baked into them, Michael told CNBC. He pointed to Anthropic's "constitution," a ruleset emphasizing ethics and safety, which he said could result in soldiers receiving "ineffective weapons, ineffective body armor, ineffective protection." The measure was "not meant to be punitive," he added.

Anthropic is the first US company to receive this classification, which is normally reserved for foreign adversaries. The AI company is suing over the designation and has drawn support from Microsoft, OpenAI, and Google employees, as well as former US military personnel. Anthropic has previously pushed back against its own AI models being used for US mass surveillance and autonomous weapons.

The administration has already signaled its intent to control AI along ideological lines by enacting regulations targeting so-called "woke AI," framed as a commitment to political neutrality. The approach echoes the Chinese government's own efforts to exert political control over AI models.

Source: CNBC
Anthropic launches internal think tank to study AI's impact on society and security

Anthropic has launched the "Anthropic Institute," an internal think tank dedicated to studying how powerful AI affects society, the economy, and security. The institute will be led by co-founder Jack Clark, who is taking on a new role as "Head of Public Benefit."

The institute plans to research how AI is transforming jobs, what new risks emerge from misuse, what "values" AI systems express, and how humans can maintain control over self-improving AI systems.

The team consists of around 30 people drawn from three existing research groups: the Frontier Red Team, the Societal Impacts team, and the economics research team. Early hires include Matt Botvinick (formerly Google DeepMind), Anton Korinek (University of Virginia), and Zoe Hitzig (previously at OpenAI).

The launch comes at a turbulent time for the company. Anthropic has sued 17 federal agencies and the Executive Office of the President after being classified as a supply chain risk. According to The Verge, Clark said he has "no concerns" about research funding. Anthropic is also opening an office in Washington, D.C.

German court says "It's AI" isn't enough to void copyright

A German regional court has ruled that song lyrics written by a human are still protected by copyright, even if the music was made with AI tools like SunoAI. Simply claiming a work is AI-generated isn't enough to strip that protection; proof is required.

Anthropic's groundbreaking lawsuit challenges the government's power to punish AI safety decisions

Anthropic is taking the US government to court. The AI developer filed a lawsuit in federal court in San Francisco against 17 federal agencies and the Executive Office of the President, claiming the government is punishing it for refusing to remove two guardrails from Claude: no lethal autonomous warfare and no mass surveillance of Americans.

According to the lawsuit, the Department of War threatened Anthropic with two contradictory moves at once: invoking the Defense Production Act to force the company to hand over Claude, or banning it from the supply chain as a security risk. Anthropic argues the government cannot simultaneously claim a company is so essential it must be conscripted by law and so dangerous it should be blacklisted.

The lawsuit also challenges the legal basis for the government's actions. The statute cited, 10 U.S.C. § 3252, was written for cases where a foreign adversary might sabotage or subvert an information system. The government's own definition of "foreign adversary" covers China, Russia, Iran, North Korea, Cuba, and Venezuela.

Millions already use AI chatbots for financial advice, but experts warn of clear limits

Millions of people are already using chatbots like ChatGPT for retirement planning, the Financial Times reports. In a Lloyds Bank survey, more than half of respondents had used AI for financial advice. Experts point to clear limitations, however: the UK's Financial Conduct Authority recently cautioned against AI hallucinations.

A test by Which? in November, for example, showed that popular chatbots like ChatGPT, Gemini, Perplexity, and Meta AI achieved overall scores of only 55 to 71 percent. Still, the pressure on the financial industry is significant: pension providers like Scottish Widows are now developing their own AI tools.

"I think that's the danger of AI is that people will assume they know what they don't," warns JPMorgan strategist John Bilton. According to Bilton, if users treat AI as an investment tool rather than a data tool, it risks making "underlying behavioural biases — such as the tendency to hold too much in cash or trade too often — stronger."

One counterexample is a 41-year-old software engineer who had ChatGPT restructure his entire $200,000 portfolio. ChatGPT advised him to diversify his risk exposure, putting 80 percent into a broad market equity index tracker and the remainder into a bond ETF. He told the Financial Times that speaking with the chatbot helped him to "commit to and actually execute" his plan.

Source: FT