OpenAI decides the best way to fight critical AI coverage is to own a newsroom
OpenAI has acquired tech talk show TBPN. The show will supposedly remain editorially independent but report to OpenAI’s communications department. That’s as contradictory as it sounds. So what’s OpenAI really after?
EU bars AI-generated content from official communications, according to Politico
Politico reports that the Commission, Parliament, and Council have barred their press teams from using fully AI-generated content. Experts see a missed opportunity.
Perplexity AI sued over alleged data sharing with Meta and Google
Perplexity AI is facing a class-action lawsuit. According to Bloomberg, the company is accused of sharing users' personal chat data with Meta and Google. The lawsuit was filed Tuesday in federal court in San Francisco.
According to the complaint, trackers are downloaded onto users' devices as soon as they visit Perplexity's home page. Trackers themselves are common across the web. What makes the allegation serious is the further claim that these trackers give Meta and Google access to users' conversations with the AI search engine - allegedly even when users enable "Incognito" mode.
The suit was filed on behalf of a man from Utah who says he shared financial and tax information with the chatbot. If the class is certified, additional plaintiffs may join. Meta pointed to its policies, which prohibit advertisers from submitting sensitive data. Perplexity spokesperson Jesse Dwyer said the company has not been served with any such lawsuit. Google did not immediately comment.
California sets its own AI rules for state contractors, pushing back against federal policy
California Governor Gavin Newsom signed an executive order on Monday requiring companies with state contracts to implement safeguards against AI misuse. Specifically, companies must ensure their AI systems don't generate illegal content, reinforce harmful biases, or violate civil rights. To prevent misinformation, state agencies will also be required to watermark AI-generated images and videos.
The order includes a separate provision for handling federal directives: if the U.S. federal government designates a company as a supply chain risk, California will conduct its own review and potentially continue working with that vendor. This comes in the wake of the Pentagon's designation of Anthropic as a supply chain risk, which bars government contractors from using Anthropic's technology for U.S. military work.
Within 120 days, California's procurement and technology agencies are expected to develop recommendations for new AI certifications. These would let companies demonstrate compliance with responsible AI practices and public safety protections.
The executive order reinforces California's push to chart its own course on AI regulation, independent of the Trump administration, which has repeatedly tried to block independent state-level AI laws.
AI-generated dating show pulls 10 million views per episode on TikTok
AI-generated dating show "Fruit Love Island" averages over 10 million views per episode on TikTok.
The show features fruit characters flirting, fighting, and cheating on each other in a villa modeled after the real "Love Island" series. Since launching last week, 21 episodes have been published. Viewers can vote on what happens next through an online form.
Justine Moore of Andreessen Horowitz sees the show as proof that AI-generated content can attract a mass audience, according to the Wall Street Journal. Despite obvious AI flaws like out-of-sync lip movements, the show has built a real following. Fans have already created recap videos, fan accounts, and parodies. It's fitting that the reality dating format - already a low-effort genre on television - is now being replicated by AI. Maybe AI slop is just the natural successor to trash TV.
AI sycophancy makes people less likely to apologize and more likely to double down, study finds
AI models tell people what they want to hear nearly 50 percent more often than other humans do. A new Science study shows this isn’t just annoying: it makes people less willing to apologize, less likely to see the other side, and more convinced they’re right. The worst part: users love it.
Anthropic reportedly views itself as the antidote to OpenAI's "tobacco industry" approach to AI
Anthropic grew out of more than just concern for AI safety—it was born from a bitter power struggle and personal conflict at OpenAI. A report by Sam Altman biographer Keach Hagey reveals how personal slights, rivalries, and strategic disagreements led to what may be the most consequential split in the AI industry.
Federal judge blocks Trump's ban on Anthropic AI models, calls security risk label "Orwellian"
Anthropic has secured a preliminary injunction against the Trump administration in a federal court in San Francisco. Judge Rita Lin temporarily blocked President Trump's order banning federal agencies from using Anthropic's AI models, along with the Pentagon's classification of the company as a supply chain risk.
Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation. [...] Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government.
Rita F. Lin, United States District Judge
The dispute traces back to a failed $200 million contract. The Pentagon wanted unrestricted access to Anthropic's Claude models, but Anthropic insisted on guarantees that the models wouldn't be used for autonomous weapons or mass surveillance. Defense Secretary Pete Hegseth then classified Anthropic as a "supply chain risk" - making it the first U.S. company to receive that designation. A final ruling is still pending.