EU bars AI-generated content from official communications, according to Politico
Politico reports that the Commission, Parliament, and Council have barred their press teams from using fully AI-generated content. Experts see a missed opportunity.
Perplexity AI is facing a class-action lawsuit. The company is accused of sharing personal user data from chats with Meta and Google, Bloomberg reports. The lawsuit was filed Tuesday in federal court in San Francisco.
According to the complaint, trackers are downloaded onto users' devices as soon as they open Perplexity's home page. That alone is not unusual; many websites do the same. What makes the allegation serious is the further claim that the trackers give Meta and Google access to users' conversations with the AI search engine. According to the lawsuit, this applies even when users enable "Incognito" mode.
The suit was filed on behalf of a man from Utah who says he shared financial and tax information with the chatbot. If the class is certified, additional plaintiffs may join. Meta pointed to its policies, which prohibit advertisers from submitting sensitive data. Perplexity spokesperson Jesse Dwyer said the company has not been served with any such lawsuit. Google did not immediately comment.
California Governor Gavin Newsom signed an executive order on Monday requiring companies with state contracts to implement safeguards against AI misuse. Specifically, companies must ensure their AI systems don't generate illegal content, reinforce harmful biases, or violate civil rights. To prevent misinformation, state agencies will also be required to watermark AI-generated images and videos.
The order includes a separate provision for handling federal directives: if the U.S. federal government designates a company as a supply chain risk, California will conduct its own review and potentially continue working with that vendor. This comes in the wake of the Pentagon's designation of Anthropic as a supply chain risk, which bars government contractors from using Anthropic's technology for U.S. military work.
Within 120 days, California's procurement and technology agencies are expected to develop recommendations for new AI certifications. These would let companies demonstrate compliance with responsible AI practices and public safety protections.
The executive order reinforces California's push to chart its own course on AI regulation, independent of the Trump administration, which has repeatedly tried to block independent state-level AI laws.
AI-generated dating show "Fruit Love Island" averages over 10 million views per episode on TikTok.
The show features fruit characters flirting, fighting, and cheating on each other in a villa modeled after the real "Love Island" series. Since launching last week, 21 episodes have been published. Viewers can vote on what happens next through an online form.
Justine Moore of Andreessen Horowitz sees the show as proof that AI-generated content can attract a mass audience, according to the Wall Street Journal. Despite obvious AI flaws like out-of-sync lip movements, the show has built a real following. Fans have already created recap videos, fan accounts, and parodies. It's fitting that the reality dating format - already a low-effort genre on television - is now being replicated by AI. Maybe AI slop is just the natural successor to trash TV.
AI models tell people what they want to hear nearly 50 percent more often than other humans do. A new Science study shows this isn’t just annoying: it makes people less willing to apologize, less likely to see the other side, and more convinced they’re right. The worst part: users love it.
Anthropic grew out of more than just concern for AI safety—it was born from a bitter power struggle and personal conflict at OpenAI. A report by Sam Altman biographer Keach Hagey reveals how personal slights, rivalries, and strategic disagreements led to what may be the most consequential split in the AI industry.
Anthropic has secured a preliminary injunction against the Trump administration in a federal court in San Francisco. Judge Rita Lin temporarily blocked President Trump's order banning federal agencies from using Anthropic's AI models, along with the Pentagon's classification of the company as a security risk.
Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation. [...] Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government.
Rita F. Lin, United States District Judge
The dispute traces back to a failed $200 million contract. The Pentagon wanted unrestricted access to Anthropic's Claude models, but Anthropic insisted on guarantees that the models wouldn't be used for autonomous weapons or mass surveillance. Defense Secretary Pete Hegseth then classified Anthropic as a "supply chain risk" - making it the first U.S. company to receive that designation. A final ruling is still pending.
Meta’s Oversight Board has examined the planned global expansion of Community Notes. Its conclusion: the system is too slow, too thinly staffed, and vulnerable to manipulation, especially given the growing flood of AI-generated disinformation. In certain countries, Meta should not introduce the program at all.
OpenAI has put development of an erotic chatbot on hold indefinitely, the Financial Times reports. The decision comes after employees and investors raised concerns about the societal impact of sexual AI content. OpenAI's well-being advisory board had already unanimously opposed the planned "Adult Mode," with one board member warning that OpenAI risked creating a "sexy suicide coach." The company is also dealing with technical problems - its age verification system misidentified minors as adults in roughly 12 percent of cases. With 100 million underage users per week, that's a significant gap.
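To put the reported figures in perspective, a back-of-the-envelope calculation shows the scale of the gap, assuming the FT's numbers (roughly 12 percent of minors misidentified as adults, 100 million underage users per week) apply uniformly:

```python
# Rough scale of the reported age-verification gap, assuming the
# reported rates apply uniformly across all underage users.
underage_weekly_users = 100_000_000  # reported weekly underage users
false_adult_rate = 0.12              # minors misclassified as adults (~12%)

misclassified_per_week = int(underage_weekly_users * false_adult_rate)
print(misclassified_per_week)  # 12000000
```

Under those assumptions, on the order of 12 million minors per week could be waved through as adults, which helps explain why the advisory board's objections carried weight.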
The AI company, currently valued at $730 billion, now wants to wait for long-term research on the effects of sexually explicit chats and emotional attachments before moving forward. According to the FT, there have already been internal discussions about scrapping the project entirely. Investors saw a poor risk-reward ratio, and employees questioned whether the project aligned with OpenAI's mission.
In ChatGPT's app code, the project appears under the name "Citron Mode," with planned age verification for users 18 and older. OpenAI is now shifting its focus to productivity tools and a "super app" built around ChatGPT.