Maximilian Schreiner
Max is the managing editor of THE DECODER, bringing his background in philosophy to explore questions of consciousness and whether machines truly think or just pretend to.
AI-generated dating show pulls 10 million views per episode on TikTok
AI-generated dating show "Fruit Love Island" averages over 10 million views per episode on TikTok.
The show features fruit characters flirting, fighting, and cheating on each other in a villa modeled after the real "Love Island" series. Since the show launched last week, 21 episodes have been published. Viewers can vote on what happens next through an online form.
Justine Moore of Andreessen Horowitz sees the show as proof that AI-generated content can attract a mass audience, according to the Wall Street Journal. Despite obvious AI flaws like out-of-sync lip movements, the show has built a real following. Fans have already created recap videos, fan accounts, and parodies. It's fitting that the reality dating format - already a low-effort genre on television - is now being replicated by AI. Maybe AI slop is just the natural successor to trash TV.
Meta's own supervisory body warns that Community Notes are no match for AI disinformation
Meta’s Oversight Board has examined the planned global expansion of Community Notes. Its conclusion: the system is too slow, too thinly staffed, and vulnerable to manipulation, especially given the growing flood of AI-generated disinformation. In certain countries, Meta should not introduce the program at all.
OpenAI halts "Adult Mode" as advisors, investors, and employees raise red flags
OpenAI has put development of an erotic chatbot on hold indefinitely, the Financial Times reports. The decision comes after employees and investors raised concerns about the societal impact of sexual AI content. OpenAI's well-being advisory board had already unanimously opposed the planned "Adult Mode," with one board member warning that OpenAI risked creating a "sexy suicide coach." The company is also dealing with technical problems - its age verification system misidentified minors as adults in roughly 12 percent of cases. With 100 million underage users per week, that's a significant gap.
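Taken together, the two figures imply a large absolute number of misclassified minors; a quick back-of-envelope check (both inputs are the article's, the multiplication is the only addition):

```python
# Figures from the article: ~12 percent of age checks misidentify minors
# as adults, across roughly 100 million underage users per week.
underage_weekly = 100_000_000
error_rate_pct = 12

misclassified_weekly = underage_weekly * error_rate_pct // 100
print(f"{misclassified_weekly:,}")  # 12,000,000 minors potentially misread as adults each week
```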
The AI company, currently valued at $730 billion, now wants to wait for long-term research on the effects of sexually explicit chats and emotional attachments before moving forward. According to the FT, there have already been internal discussions about scrapping the project entirely. Investors saw a poor risk-reward ratio, and employees questioned whether the project aligned with OpenAI's mission.
In ChatGPT's app code, the project appears under the name "Citron Mode," with planned age verification for users 18 and older. OpenAI is now shifting its focus to productivity tools and a "super app" built around ChatGPT.
GitHub will use Copilot interaction data to train AI models starting April 2026
Starting April 24, 2026, GitHub is changing its data policy for Copilot. Interaction data from users on the Free, Pro, and Pro+ plans will be used to train AI models unless users actively opt out. This includes prompts, outputs, code snippets, filenames, repository structures, and feedback.
Users who previously opted out will keep their existing settings. Copilot Business and Enterprise customers are not affected. GitHub's chief product officer Mario Rodriguez says real-world usage data improves the models. Internal testing with data from Microsoft employees already led to higher acceptance rates.
The data can be shared with Microsoft, but not with third-party AI model providers. Users who want to opt out can do so in their Copilot settings under "Privacy." More details are available on the GitHub blog.
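"Acceptance rate" in the Copilot context usually means the share of displayed completions that users actually keep; a minimal sketch of the metric (the function is illustrative, not GitHub's published formula):

```python
def acceptance_rate(shown: int, accepted: int) -> float:
    """Share of displayed completions the user accepted.

    Illustrative definition only; GitHub has not published its exact formula.
    """
    return accepted / shown if shown else 0.0

# Example: 1,000 completions shown, 310 kept by the user.
print(acceptance_rate(1000, 310))  # 0.31
```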
Arm breaks from its licensing-only model with first in-house chip built for AI data centers
For the first time in its 35-year history, Arm has manufactured its own chip, expanding beyond its long-standing business model of licensing chip designs to companies like Apple and Nvidia. The new CPU, called "Arm AGI," was developed in partnership with Meta and is designed to handle AI workloads in data centers.
The chip packs up to 136 cores, runs at up to 3.7 GHz, and is built on TSMC's 3nm process. According to Arm CEO Rene Haas, the chip is meant to deliver high-performance, energy-efficient computing for AI infrastructure. Meta plans to pair the CPU with its own MTIA accelerator, as Meta's head of infrastructure, Santosh Janardhan, explained.
Other partners include OpenAI, Cerebras, Cloudflare, and Lenovo. The first systems are already available, with broader availability expected in the second half of 2026.
Claude Code's new Auto Mode tries to balance safety and speed
Developers using Claude Code have faced an awkward choice: approve every single action manually or turn off all safety checks entirely. Anthropic’s new Auto Mode aims to offer a middle ground.
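Claude Code already supports a per-project permission policy in `.claude/settings.json`, where individual tools and command patterns can be pre-approved or blocked; a minimal sketch of such a middle-ground policy (the specific patterns are examples, not recommendations):

```json
{
  "permissions": {
    "allow": [
      "Read",
      "Bash(npm run test:*)"
    ],
    "deny": [
      "Bash(rm:*)"
    ]
  }
}
```

Auto Mode presumably layers its own heuristics on top of an explicit list like this, approving routine actions on its own while still prompting before anything destructive.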
Microsoft snaps up Texas data center that Oracle and OpenAI left behind
Microsoft has agreed to lease a data center in Abilene, Texas, that was originally built for Oracle and OpenAI, Bloomberg News reports. The facility offers roughly 700 megawatts of capacity and sits right next to the Stargate campus - Oracle and OpenAI's flagship AI infrastructure project.
Microsoft struck the deal with developer Crusoe after both Oracle and OpenAI walked away from negotiations over the site. Back in March, Bloomberg reported that Oracle and OpenAI had abandoned their expansion plans in Texas because financing talks stalled and OpenAI's needs had shifted. Oracle pushed back on those reports at the time, calling claims of delays at the Abilene site inaccurate.
Microsoft, Oracle, OpenAI, and Crusoe have not commented on the new report, according to Reuters.
The lease aligns with Microsoft's broader push to expand its own computing infrastructure. In a recent podcast, Microsoft CEO Satya Nadella said he expects an oversupply of computing capacity and falling prices by 2027 or 2028 as a result of the current data center building boom. Nadella added that he's looking forward to renting capacity cheaply when that happens.
Google brings AI-powered dark web analysis to enterprise security teams
Google Cloud unveiled new security features at the RSA Conference 2026 in San Francisco. The centerpiece is an AI agent called "Triage and Investigation" built for enterprise security teams and embedded in Google's "Security Operations" platform. The agent reviews security alerts on its own, automatically pulls in additional data and context, and assesses whether an alert represents a real threat or a false alarm. The goal is to help analysts in security operations centers (SOCs) spend less time chasing false positives.
According to the new M-Trends report from Mandiant, Google's cybersecurity subsidiary, cybercriminals are becoming increasingly professional and organized. They're forming partnerships and deliberately destroying their victims' ability to recover, maximizing extortion pressure. The window between initial intrusion and attack has shrunk to just 22 seconds. A separate Mandiant report shows that attackers are now using AI tools that adapt in real time during an attack to evade security systems.
Google is also rolling out a new AI-powered dark web analysis tool. It automatically evaluates activity in hidden parts of the internet - things like forum posts and marketplaces where stolen data is traded. According to internal tests, the system can filter millions of these activities per day with 98 percent accuracy, flagging only genuinely relevant threats.
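For scale, "98 percent accuracy" still leaves a sizable absolute error count at that volume; a back-of-envelope check (the daily volume is an assumption, since the article says only "millions"):

```python
# Assumed daily volume; the article says only "millions" of activities.
items_per_day = 3_000_000
accuracy = 0.98

# Even at 98% accuracy, the absolute number of misclassified items stays large.
misclassified_per_day = round(items_per_day * (1 - accuracy))
print(f"{misclassified_per_day:,}")  # 60,000 potential misses or false flags per day
```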