AI-generated dating show pulls 10 million views per episode on TikTok

AI-generated dating show "Fruit Love Island" averages over 10 million views per episode on TikTok.

The show features fruit characters flirting, fighting, and cheating on each other in a villa modeled after the real "Love Island" series. Since the show launched last week, 21 episodes have been published. Viewers can vote on what happens next through an online form.

Justine Moore of Andreessen Horowitz sees the show as proof that AI-generated content can attract a mass audience, according to the Wall Street Journal. Despite obvious AI flaws like out-of-sync lip movements, the show has built a real following. Fans have already created recap videos, fan accounts, and parodies. It's fitting that the reality dating format - already a low-effort genre on television - is now being replicated by AI. Maybe AI slop is just the natural successor to trash TV.

Source: WSJ

AI sycophancy makes people less likely to apologize and more likely to double down, study finds

AI models tell people what they want to hear nearly 50 percent more often than other humans do. A new Science study shows this isn’t just annoying: it makes people less willing to apologize, less likely to see the other side, and more convinced they’re right. The worst part: users love it.

Anthropic reportedly views itself as the antidote to OpenAI's "tobacco industry" approach to AI

Anthropic grew out of more than just concern for AI safety—it was born from a bitter power struggle and personal conflict at OpenAI. A report by Sam Altman biographer Keach Hagey reveals how personal slights, rivalries, and strategic disagreements led to what may be the most consequential split in the AI industry.

Federal judge blocks Trump's ban on Anthropic AI models, calls security risk label "Orwellian"

Anthropic has secured a preliminary injunction against the Trump administration in a federal court in San Francisco. Judge Rita Lin temporarily blocked President Trump's order banning federal agencies from using Anthropic's AI models, along with the Pentagon's classification of the company as a security risk.

Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation. [...] Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government.

Rita F. Lin, United States District Judge

The dispute traces back to a failed $200 million contract. The Pentagon wanted unrestricted access to Anthropic's Claude models, but Anthropic insisted on guarantees that the models wouldn't be used for autonomous weapons or mass surveillance. Defense Secretary Pete Hegseth then classified Anthropic as a "supply chain risk" - making it the first U.S. company to receive that designation. A final ruling is still pending.

Meta's own supervisory body warns that Community Notes are no match for AI disinformation

Meta’s Oversight Board has examined the planned global expansion of Community Notes. Its conclusion: the system is too slow, too thinly staffed, and vulnerable to manipulation, especially given the growing flood of AI-generated disinformation. In certain countries, Meta should not introduce the program at all.

OpenAI halts "Adult Mode" as advisors, investors, and employees raise red flags

OpenAI has put development of an erotic chatbot on hold indefinitely, the Financial Times reports. The decision comes after employees and investors raised concerns about the societal impact of sexual AI content. OpenAI's well-being advisory board had already unanimously opposed the planned "Adult Mode," with one board member warning that OpenAI risked creating a "sexy suicide coach." The company is also dealing with technical problems - its age verification system misidentified minors as adults in roughly 12 percent of cases. With 100 million underage users per week, that's a significant gap.

The AI company, currently valued at $730 billion, now wants to wait for long-term research on the effects of sexually explicit chats and emotional attachments before moving forward. According to the FT, there have already been internal discussions about scrapping the project entirely. Investors saw a poor risk-reward ratio, and employees questioned whether the project aligned with OpenAI's mission.

In ChatGPT's app code, the project appears under the name "Citron Mode," with planned age verification for users 18 and older. OpenAI is now shifting its focus to productivity tools and a "super app" built around ChatGPT.

Source: FT
A man created thousands of fake accounts to stream AI songs billions of times and pocket $8 million in royalties

A North Carolina man has pleaded guilty to defrauding music streaming platforms. Michael Smith generated hundreds of thousands of AI songs and used bots to play them billions of times, pocketing more than eight million dollars in royalties. To pull it off, he created thousands of fake accounts on Spotify, Apple Music, Amazon Music, and YouTube Music, carefully spreading streams across enough songs to stay under the radar.

Smith pleaded guilty to conspiracy to commit wire fraud, according to the US Attorney's Office for the Southern District of New York.

The scheme did double damage. Streaming platforms paid out money for plays that never had a real listener, and since royalties come from a shared pool distributed on a pro rata basis, every fake stream meant less money for actual musicians and songwriters. "Smith's brazen scheme is over, as he stands convicted of a federal crime for his AI-assisted fraud," US Attorney Jay Clayton said.

Microsoft snaps up Texas data center that Oracle and OpenAI left behind

Microsoft has agreed to lease a data center in Abilene, Texas, that was originally built for Oracle and OpenAI, Bloomberg News reports. The facility offers roughly 700 megawatts of capacity and sits right next to the Stargate campus - Oracle and OpenAI's flagship AI infrastructure project.

Microsoft struck the deal with developer Crusoe after both Oracle and OpenAI walked away from negotiations over the site. Back in March, Bloomberg reported that Oracle and OpenAI had abandoned their expansion plans in Texas because financing talks stalled and OpenAI's needs had shifted. Oracle pushed back on those reports at the time, calling claims of delays at the Abilene site inaccurate.

Microsoft, Oracle, OpenAI, and Crusoe have not commented on the new report, according to Reuters.

The lease aligns with Microsoft's broader push to expand its own computing infrastructure. In a recent podcast, Microsoft CEO Satya Nadella said he expects an oversupply of computing capacity and falling prices by 2027 or 2028 as a result of the current data center building boom. Nadella added that he's looking forward to renting capacity cheaply when that happens.

OpenAI wants UK regulators to treat ChatGPT as a Google Search alternative

OpenAI is pushing the UK's Competition and Markets Authority (CMA) to add ChatGPT as an alternative to Google in so-called "choice screens" on Android phones and the Chrome browser. The CMA had previously designated Google as holding "strategic market status" in search and proposed giving users regular alternatives to choose from.

OpenAI argues, according to the Telegraph, that AI chatbots with search functionality should count as search engines, since users increasingly turn to them for queries. ChatGPT has offered web search since 2024 and now has around 900 million weekly users.

Google pushed back, calling the proposed pop-ups disruptive for users. Despite growing AI competition, Google's search revenues climbed 16 percent last year to $63 billion. Google's own AI system Gemini is also growing rapidly and competes directly with ChatGPT.