
95% of UK students now use AI, and their experiences couldn't be more divided

95 percent of British students use generative AI. But while some say it deepens their learning, others worry it's replacing their ability to think for themselves. A new survey reveals a student body caught between enthusiasm and overwhelm, at universities that aren't keeping up.

Europe's AI paradox is record adoption that funds foreign ecosystems instead of building its own

Europe leads in AI adoption and matches the US in talent, but owns almost none of the platforms it depends on. A new report by Prosus and Dealroom lays out where the disconnect starts: from missing infrastructure and fragmented regulation to a funding gap that hands Europe’s best startups to American investors. Closing that gap won’t be easy.

Pentagon plans to let AI companies train models on classified data

The US Department of War is working to set up secure environments where AI companies can train their models on classified data. Until now, models were only allowed to read classified data, not learn from it.

Beijing approves Nvidia's H200 chip sales as the company builds a China-ready version of its Groq inference chip

Nvidia has received long-awaited approval from Beijing to sell its second-most-powerful AI chip, the H200, to Chinese customers, Reuters reports. The company had halted production of the chip last year due to regulatory hurdles on both sides of the Pacific.

OpenAI's own wellbeing advisors warned against erotic mode, calling it a "sexy suicide coach"

OpenAI’s wellbeing advisory board reportedly voted unanimously against the company’s planned Adult Mode for ChatGPT. Internally, the company is struggling with an error-prone age detection system and unresolved safety issues.

US War Department CTO says Anthropic's AI models "pollute" the supply chain with built-in ethics

Emil Michael, the US Department of War's chief technology officer, made clear that classifying Anthropic as a supply chain risk is an ideologically motivated move. Claude models "pollute" the supply chain because they have a "different policy preference" baked into them, Michael told CNBC. He pointed to Anthropic's "constitution," a ruleset emphasizing ethics and safety, which he said could result in soldiers receiving "ineffective weapons, ineffective body armor, ineffective protection." The measure was "not meant to be punitive," he added.

Anthropic is the first US company to receive this classification, which is normally reserved for foreign adversaries. The AI company is suing over the designation and has drawn support from Microsoft, OpenAI, and Google employees, as well as former US military personnel. Anthropic has previously pushed back against its own AI models being used for US mass surveillance and autonomous weapons.

The administration has already signaled its intent to control AI along ideological lines by enacting regulations targeting so-called "woke AI," framed as a commitment to political neutrality. The approach echoes the Chinese government's own efforts to exert political control over AI models.

Source: CNBC