
Pentagon plans to let AI companies train models on classified data

The US Department of War is working to set up secure environments where AI companies can train their models on classified data. Until now, models were only allowed to read classified data, not learn from it.

Beijing approves Nvidia's H200 chip sales as the company builds a China-ready version of its Groq inference chip

Nvidia has received long-awaited approval from Beijing to sell its second-most-powerful AI chip, the H200, to Chinese customers, Reuters reports. The company had halted production of the chip last year due to regulatory hurdles on both sides of the Pacific.

OpenAI's own wellbeing advisors warned against erotic mode, called it a "sexy suicide coach"

OpenAI's wellbeing advisory board reportedly voted unanimously against the company's planned Adult Mode for ChatGPT. Internally, the company is also struggling with an error-prone age-detection system and unresolved safety issues.

US War Department CTO says Anthropic's AI models "pollute" the supply chain with built-in ethics

Emil Michael, the US Department of War's chief technology officer, made clear that classifying Anthropic as a supply chain risk is an ideologically motivated move. Claude models "pollute" the supply chain because they have a "different policy preference" baked into them, Michael told CNBC. He pointed to Anthropic's "constitution," a ruleset emphasizing ethics and safety, which he said could leave soldiers with "ineffective weapons, ineffective body armor, ineffective protection." The measure was "not meant to be punitive," he added.

Anthropic is the first US company to receive this classification, which is normally reserved for foreign adversaries. The AI company is suing over the designation and has drawn support from Microsoft, OpenAI, and Google employees, as well as former US military personnel. Anthropic has previously pushed back against its own AI models being used for US mass surveillance and autonomous weapons.

The administration has already signaled its intent to control AI along ideological lines by enacting regulations targeting so-called "woke AI," framed as a commitment to political neutrality. The approach echoes the Chinese government's own efforts to exert political control over AI models.

Source: CNBC
Anthropic launches internal think tank to study AI's impact on society and security

Anthropic has launched the "Anthropic Institute," an internal think tank dedicated to studying how powerful AI affects society, the economy, and security. The institute will be led by co-founder Jack Clark, who is taking on a new role as "Head of Public Benefit."

The institute plans to research how AI is transforming jobs, what new risks emerge from misuse, what "values" AI systems express, and how humans can maintain control over self-improving AI systems.

The team consists of around 30 people drawn from three existing research groups: the Frontier Red Team, the Societal Impacts team, and the economics research team. Early hires include Matt Botvinick (formerly Google DeepMind), Anton Korinek (University of Virginia), and Zoe Hitzig (previously at OpenAI).

The launch comes at a turbulent time for the company. Anthropic has sued 17 federal agencies and the Executive Office of the President after being classified as a supply chain risk. According to The Verge, Clark said he has "no concerns" about research funding. Anthropic is also opening an office in Washington, D.C.