
Someone threw a Molotov cocktail at OpenAI CEO Sam Altman's home in the middle of the night

Someone threw a Molotov cocktail at OpenAI CEO Sam Altman’s home at 3:45 a.m. In response, Altman published a personal blog post admitting past mistakes and comparing the AI industry’s power struggles to the “Ring of Power.”

CIA plans to integrate AI assistants into all analysis platforms

According to CIA Deputy Director Michael Ellis, the agency recently produced its first fully autonomous intelligence report using AI, Politico reports. Over the next few years, AI assistants will be integrated into all of the agency's analysis platforms. These tools are meant to help analysts with tasks like drafting assessments, verifying findings, and identifying trends.

Ellis stressed that humans will continue to make the important decisions. The CIA tested 300 AI projects over the past year, covering areas like data processing and language translation. The agency's expanded Center for Cyber Intelligence, which oversees the CIA's covert hacking operations, is also set to make greater use of AI and emerging technologies.

Ellis also took an indirect shot at Anthropic, saying the CIA won't let private companies dictate how it uses their technology. Anthropic is currently in a dispute with the Pentagon after the company tried to contractually restrict its models from being used for lethal strikes and mass surveillance. The Pentagon has since classified Anthropic as a supply chain risk. Ellis also warned that China has made significant technological gains.

CoreWeave signs multi-year cloud deal with Anthropic to power Claude

CoreWeave has signed a multi-year cloud deal with AI startup Anthropic. The agreement will provide Anthropic with compute capacity for its Claude model family starting later this year. Financial details were not disclosed. CoreWeave's stock rose more than 5 percent in premarket trading. The buildout will happen in phases, with the option to expand later.

For CoreWeave, the Anthropic deal is part of a string of major contracts: last year, the company signed an $11.9 billion deal with OpenAI, followed by a $6.3 billion order with Nvidia in September, and just the day before, an expanded $21 billion deal with Meta. The Anthropic contract helps CoreWeave diversify its revenue: until now, around 67 percent of its income came from Microsoft. CoreWeave's stock is up about 29 percent year to date.

US appeals court refuses to block Pentagon's blacklisting of Anthropic

A US appeals court has declined to temporarily block the Pentagon's designation of Anthropic as a national security risk, Reuters reports. The ruling came Wednesday in Washington, D.C. Defense Secretary Pete Hegseth had placed the AI company on a blacklist after Anthropic refused to lift usage restrictions on its AI assistant Claude for surveillance and autonomous weapons.

Anthropic calls the move retaliation for its stance on AI safety and warns of billions in damages. The Justice Department says the decision was based on contract terms.

A California court had ruled in Anthropic's favor in a parallel case in late March. The blacklisting marks the first time a US company has been publicly designated as a supply chain risk. A final ruling is still pending.

China actively targeting Taiwan's chip talent and technology, security report says

China is actively trying to poach Taiwan's semiconductor expertise and talent to circumvent international technology restrictions, according to a report from Taiwan's National Security Bureau cited by Reuters.

The report says China is using indirect channels to recruit talent, steal technology, and acquire controlled goods. Taiwan is home to TSMC, the world's largest contract chipmaker and a key supplier to Nvidia and Apple.

In the first quarter of 2026 alone, the report logged more than 170 million attempted cyberattacks on Taiwan's government network. The agency also warns that China could try to influence Taiwan's local elections later this year using deepfakes and fabricated polls.

OpenAI's safety brain drain finally gets an explanation, and it's just Sam Altman's vibes

“My vibes don’t really fit.” In a new New Yorker profile based on over 100 interviews, Sam Altman explains why safety researchers keep leaving OpenAI and why shifting commitments others might call deception are just part of the job.