US appeals court refuses to block Pentagon's blacklisting of Anthropic

A US appeals court has declined to temporarily block the Pentagon's designation of Anthropic as a national security risk, Reuters reports. The ruling came Wednesday in Washington, D.C. Defense Secretary Pete Hegseth had placed the AI company on a blacklist after Anthropic refused to lift usage restrictions on its AI assistant Claude for surveillance and autonomous weapons.

Anthropic calls the move retaliation for its stance on AI safety and warns of billions in damages. The Justice Department says the decision was based on contract terms.

A California court had ruled in Anthropic's favor in a parallel case in late March. The Pentagon's designation marks the first time a US company has been publicly labeled a supply chain risk. A final ruling in the appeals case is still pending.

China actively targeting Taiwan's chip talent and technology, security report says

China is actively trying to poach Taiwan's semiconductor expertise and talent to circumvent international technology restrictions, according to a report from Taiwan's National Security Bureau cited by Reuters.

The report says China is using indirect channels to recruit talent, steal technology, and acquire controlled goods. Taiwan is home to TSMC, the world's largest contract chipmaker and a key supplier to Nvidia and Apple.

In the first quarter of 2026 alone, the report logged more than 170 million attempted cyberattacks on Taiwan's government network. The agency also warns that China could try to influence Taiwan's local elections later this year using deepfakes and fabricated polls.

OpenAI's safety brain drain finally gets an explanation and it's just Sam Altman's vibes

“My vibes don’t really fit.” In a new New Yorker profile based on over 100 interviews, Sam Altman explains why safety researchers keep leaving OpenAI and why shifting commitments others might call deception are just part of the job.

OpenAI decides the best way to fight critical AI coverage is to own a newsroom

OpenAI has acquired tech talk show TBPN. The show will supposedly remain editorially independent but report to OpenAI’s communications department. That’s as contradictory as it sounds. So what’s OpenAI really after?

Perplexity AI sued over alleged data sharing with Meta and Google

Perplexity AI is facing a class-action lawsuit. The company is accused of sharing personal user data from chats with Meta and Google, Bloomberg reports. The lawsuit was filed Tuesday in federal court in San Francisco.

According to the complaint, trackers are downloaded onto users' devices as soon as they open Perplexity's home page. That alone is not unusual for many websites. What makes the allegation serious is the further claim that the trackers give Meta and Google access to users' conversations with the AI search engine. According to the lawsuit, this applies even when users enable "Incognito" mode.

The suit was filed on behalf of a man from Utah who says he shared financial and tax information with the chatbot. If certified, additional plaintiffs may join. Meta pointed to its policies, which prohibit advertisers from submitting sensitive data. Perplexity spokesperson Jesse Dwyer said the company has not been served with any such lawsuit. Google did not immediately comment.

California sets its own AI rules for state contractors, pushing back against federal policy

California Governor Gavin Newsom signed an executive order on Monday requiring companies with state contracts to implement safeguards against AI misuse. Specifically, companies must ensure their AI systems don't generate illegal content, reinforce harmful biases, or violate civil rights. To prevent misinformation, state agencies will also be required to watermark AI-generated images and videos.

The order includes a separate provision for handling federal directives: if the U.S. federal government designates a company as a supply chain risk, California will conduct its own review and potentially continue working with that vendor. This comes in the wake of the Pentagon's designation of Anthropic as a supply chain risk, which bars government contractors from using Anthropic's technology for U.S. military work.

Within 120 days, California's procurement and technology agencies are expected to develop recommendations for new AI certifications. These would let companies demonstrate compliance with responsible AI practices and public safety protections.

The executive order reinforces California's push to chart its own course on AI regulation, independent of the Trump administration, which has repeatedly tried to block independent state-level AI laws.