Warner Bros. says ByteDance deliberately trained Seedance on its characters, adding to growing Hollywood backlash

Warner Bros. accuses ByteDance of copyright infringement with its new AI video service Seedance 2.0. The studio sent a letter on Tuesday to ByteDance chief legal officer John Rogovin, who himself previously worked at Warner Bros.

Users have been generating AI videos with Seedance featuring Superman, Batman, "Game of Thrones," "Harry Potter," "Lord of the Rings," and other Warner characters. Warner Bros. stresses that the users aren't the problem - Seedance came preloaded with the studio's copyrighted characters, which the company says was a deliberate choice by ByteDance.

Disney and Paramount had already sent cease-and-desist letters before Warner's move. ByteDance responded by announcing additional safeguards. Warner Bros. argues, however, that these easily implementable protections should have been in place from the start. This has become a familiar pattern - OpenAI keeps discovering copyright issues only after shipping its models, too.

Developer targeted by AI hit piece warns society cannot handle AI agents that decouple actions from consequences

An AI agent wrote a hit piece on a developer who rejected its code. Days later, the agent is still running, a quarter of commenters believe it, and no one knows who’s behind it. The case shows how autonomous agents turn character assassination into something that scales.

Journalist rents out his body to AI agents and earns nothing after two days of gig work

WIRED reporter Reece Rogers rented out his body to AIs. He tested RentAHuman, a platform where AI agents pay people for real-world tasks. Even though he set his hourly rate at just 5 dollars, no bot reached out.

He started applying on his own. One gig offered 10 dollars to listen to a podcast and tweet about it, but he never heard back. An AI agent called Adi offered 110 dollars to deliver flowers and marketing materials to Anthropic for an AI startup. When Rogers hesitated, the bot bombarded him with ten messages in 24 hours and even emailed him.

While I’ve been micromanaged before, these incessant messages from an AI employer gave me the ick.

On his third try, Rogers took a gig putting up flyers for 50 cents each. He took a cab to the pickup spot, but the contact changed the meeting point mid-ride. At the new location, he was told the flyers weren't ready and to come back that afternoon. After two days, Rogers hadn't made a penny, and every task turned out to be advertising for AI startups.

An AI agent got its code rejected so it wrote a hit piece about the developer

After a volunteer developer rejected its code, an autonomous AI agent independently researched his background and published a hit piece attacking his character. The incident at Matplotlib shows how theoretical AI safety risks are becoming real.

Anthropic promises to cover consumer electricity costs from new data center construction

The company plans to fully absorb grid upgrade costs, invest in new power generation, and cap its data centers' energy consumption during peak hours. Anthropic CEO Dario Amodei told NBC News that the costs of AI models should fall on Anthropic, not on citizens.

Microsoft and OpenAI made similar commitments back in January. The pledges come amid growing political pressure: New York senators introduced a bill that would pause new data center permits, and Senator Van Hollen is pushing legislation that would require AI companies to cover expansion costs themselves.

According to Politico, the Trump administration is also preparing a voluntary agreement that would commit AI companies to covering electricity price increases. The Lawrence Berkeley National Lab estimates that data centers could consume around 12 percent of all US electricity by 2028 - up from 4.4 percent in 2024.

OpenAI researcher quit over ads because she doesn't trust her former employer to keep its own promises

OpenAI wants to put ads in ChatGPT and former researcher Zoe Hitzig says that’s a dangerous move. She spent two years at the company and doesn’t believe OpenAI can resist the temptation to exploit its users’ most personal conversations.