EU bars AI-generated content from official communications, according to Politico
Politico reports that the Commission, Parliament, and Council have barred their press teams from using fully AI-generated content. Experts see a missed opportunity.
Perplexity AI is facing a class-action lawsuit. The company is accused of sharing personal user data from chats with Meta and Google, Bloomberg reports. The lawsuit was filed Tuesday in federal court in San Francisco.
According to the complaint, trackers are downloaded onto users' devices as soon as they log into Perplexity's home page. That is not unusual for many websites. What makes the allegation serious is the further claim: the trackers allegedly give Meta and Google access to conversations with the AI search engine. According to the lawsuit, this also applies when users enable "Incognito" mode.
The suit was filed on behalf of a man from Utah who says he shared financial and tax information with the chatbot. If the court certifies the class, additional plaintiffs may join. Meta pointed to its policies, which prohibit advertisers from submitting sensitive data. Perplexity spokesperson Jesse Dwyer said the company has not been served with any such lawsuit. Google did not immediately comment.
AI infrastructure company Nebius Group is building a 310-megawatt data center in Lappeenranta, Finland, close to the Russian border. The project is valued at over $10 billion and would become one of the largest AI data centers in Europe. Finnish developer Polarnode is already constructing the facility, with a phased launch planned starting in 2027.
Nebius recently signed contracts totaling more than $40 billion with Microsoft and Meta. The new data center will train AI models and run AI applications but isn't tied to a single customer. Nebius picked Finland for its low energy prices, renewable power, and cool climate, all of which help cut cooling costs. The facility would be the company's largest site outside the US and is expected to cover roughly 10 percent of Nebius' total planned capacity, according to CEO Arkady Volozh.
California Governor Gavin Newsom signed an executive order on Monday requiring companies with state contracts to implement safeguards against AI misuse. Specifically, companies must ensure their AI systems don't generate illegal content, reinforce harmful biases, or violate civil rights. To prevent misinformation, state agencies will also be required to watermark AI-generated images and videos.
The order includes a separate provision for handling federal directives: if the U.S. federal government designates a company as a supply chain risk, California will conduct its own review and potentially continue working with that vendor. This comes in the wake of the Pentagon's designation of Anthropic as a supply chain risk, which bars government contractors from using Anthropic's technology for U.S. military work.
Within 120 days, California's procurement and technology agencies are expected to develop recommendations for new AI certifications. These would let companies demonstrate compliance with responsible AI practices and public safety protections.
The executive order reinforces California's push to chart its own course on AI regulation, independent of the Trump administration, which has repeatedly tried to block independent state-level AI laws.
AI-generated dating show "Fruit Love Island" averages over 10 million views per episode on TikTok.
The show features fruit characters flirting, fighting, and cheating on each other in a villa modeled after the real "Love Island" series. Since launching last week, 21 episodes have been published. Viewers can vote on what happens next through an online form.
Justine Moore of Andreessen Horowitz sees the show as proof that AI-generated content can attract a mass audience, according to the Wall Street Journal. Despite obvious AI flaws like out-of-sync lip movements, the show has built a real following: fans have already created recap videos, fan accounts, and parodies. It's fitting that the reality dating format - already a low-effort genre on television - is now being replicated by AI. Maybe AI slop is simply the natural successor to trash TV.
Meta’s Oversight Board has examined the planned global expansion of Community Notes. Its conclusion: the system is too slow, too thinly staffed, and vulnerable to manipulation, especially given the growing flood of AI-generated disinformation. In certain countries, Meta should not introduce the program at all.
OpenAI has put development of an erotic chatbot on hold indefinitely, the Financial Times reports. The decision comes after employees and investors raised concerns about the societal impact of sexual AI content. OpenAI's well-being advisory board had already unanimously opposed the planned "Adult Mode," with one board member warning that OpenAI risked creating a "sexy suicide coach." The company is also dealing with technical problems - its age verification system misidentified minors as adults in roughly 12 percent of cases. With 100 million underage users per week, that's a significant gap.
The AI company, currently valued at $730 billion, now wants to wait for long-term research on the effects of sexually explicit chats and emotional attachments before moving forward. According to the FT, there have already been internal discussions about scrapping the project entirely. Investors saw a poor risk-reward ratio, and employees questioned whether the project aligned with OpenAI's mission.
In ChatGPT's app code, the project appears under the name "Citron Mode," with planned age verification for users 18 and older. OpenAI is now shifting its focus to productivity tools and a "super app" built around ChatGPT.