Google locks in new energy reserves for its AI expansion

Google is ramping up its AI infrastructure with a major energy acquisition. Parent company Alphabet is buying clean energy developer Intersect for $4.75 billion in cash, plus assumed debt.

Alphabet is acquiring Intersect's energy and data center projects currently under development or construction; Intersect holds assets worth $15 billion. By 2028, projects with roughly 10.8 gigawatts of capacity should be online, more than twenty times the electricity output of the Hoover Dam, according to Reuters. Intersect will continue to operate separately from Alphabet, and its existing plants in Texas and California aren't part of the deal.

The deal reflects a broader trend: big tech companies are pouring money into energy assets as US power grids struggle to keep pace with soaring electricity demand from artificial intelligence. Google says it plans to double its AI capacity every six months, aiming for a thousandfold increase in output within four to five years; at that pace, ten doublings (2^10 ≈ 1,000) take five years. To hit those targets, Google is also investing in advanced reactor technology.

OpenAI reportedly dramatically improved its compute profit margins

OpenAI has reportedly made major strides in improving the profitability of its AI services. The company's compute margin, the share of revenue from paying users left over after server costs, jumped from around 35 percent in January 2024 to roughly 70 percent by October 2025, according to internal financial data obtained by The Information. For comparison, Anthropic is expected to reach 53 percent by year's end.
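To make the metric concrete, here's a minimal sketch of how a compute margin is calculated. The numbers are invented for illustration, not OpenAI's actual figures:

```python
# Illustrative numbers only; OpenAI's actual revenue and server costs aren't public here.
revenue = 100.0        # revenue from paying users, e.g. in $M
server_costs = 30.0    # compute spent serving those users

compute_margin = (revenue - server_costs) / revenue
print(f"Compute margin: {compute_margin:.0%}")  # -> Compute margin: 70%
```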

OpenAI achieved these gains by cutting rental costs for computing power, optimizing its models, and launching a pricier subscription tier. Still, the company has a long road ahead to profitability: CEO Sam Altman continues to plan major investments in additional computing power while pursuing further circular deals, in which partners' investments flow back to them through OpenAI's compute purchases.

OpenAI is also reportedly working on a funding round of up to $100 billion.

Nvidia wants to create universal AI agents for virtual worlds with NitroGen

Nvidia has released a new base model for gaming agents. NitroGen is an open vision-action model trained on 40,000 hours of gameplay video from more than 1,000 games. The researchers tapped a previously overlooked resource: YouTube and Twitch videos with visible controller overlays. Using template matching and a fine-tuned SegFormer model, they extracted player inputs directly from these recordings; a rough sketch of that step follows.
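Here's a minimal sketch of the overlay-matching idea using OpenCV template matching. The file names, single-button setup, and 0.8 threshold are illustrative assumptions, not details from the paper:

```python
import cv2

# Toy sketch: match a "pressed button" overlay graphic against a video frame.
frame = cv2.imread("gameplay_frame.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("overlay_button_a_pressed.png", cv2.IMREAD_GRAYSCALE)

# Normalized cross-correlation scores the template at every position in the
# frame; a high peak means the "pressed" button graphic is visible.
scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
_, max_score, _, max_loc = cv2.minMaxLoc(scores)

if max_score >= 0.8:  # threshold is a guess for illustration
    print(f"Frame labeled: button A pressed at {max_loc} (score {max_score:.2f})")
else:
    print("Frame labeled: button A not pressed")
```

Run per frame across a video, this turns raw footage into (frame, action) training pairs without any manual labeling.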

NitroGen builds on Nvidia's GR00T N1.5 robotics model. According to the researchers, it's the first model to demonstrate that robotics foundation models can work as universal agents across virtual environments with different physics engines and visual styles. The model handles various genres—action RPGs, platformers, roguelikes, and more. When dropped into unfamiliar games, it achieves up to 52 percent better success rates than models trained from scratch.

The team, which includes researchers from Nvidia, Stanford, Caltech, and other universities, has made the dataset, model weights, paper, and code publicly available.

Ad
Alibaba's Qwen releases AI model that splits images into editable layers like Photoshop

Alibaba's AI unit Qwen has released a new image editing model that breaks down photos into separate, editable components. Qwen-Image-Layered splits images into multiple individual layers with transparent backgrounds (RGBA layers), letting users edit each layer independently without affecting the rest of the image.

The model handles straightforward edits like resizing, repositioning, and recoloring individual elements. Users can swap out backgrounds, replace people, modify text, or delete, move, and enlarge objects. Images can be split into either three or eight layers, and the process is repeatable: each layer can be broken down into additional layers as needed. The Qwen team describes this approach as a bridge between standard images and structured, editable representations.
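Conceptually, the layered output behaves like a Photoshop document: alpha-compositing the RGBA layers back together reproduces the full image. A minimal sketch with Pillow, assuming hypothetical layer_0.png through layer_2.png files; the model's actual output format may differ:

```python
from PIL import Image

# Hypothetical file names; the model's actual output format may differ.
layer_paths = [f"layer_{i}.png" for i in range(3)]  # RGBA layers, back to front

layers = [Image.open(p).convert("RGBA") for p in layer_paths]
canvas = Image.new("RGBA", layers[0].size, (0, 0, 0, 0))
for layer in layers:
    # Alpha-blend each layer onto the canvas in order; editing one layer
    # (say, moving an object) leaves all other layers untouched.
    canvas = Image.alpha_composite(canvas, layer)

canvas.convert("RGB").save("recomposited.png")
```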

The Qwen team has published the code on GitHub and the model weights on Hugging Face and ModelScope; more details are in the blog post and technical report. Demos for hands-on testing are also available on both platforms.

Source: Blog
Anthropic's Claude Opus 4.5 can tackle some tasks lasting nearly five hours

AI research organization METR has released new benchmark results for Claude Opus 4.5. Anthropic's latest model achieved a 50 percent time horizon of roughly 4 hours and 49 minutes—the highest score ever recorded. The time horizon measures how long a task can be while still being solved by an AI model at a given success rate (in this case, 50 percent).
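METR derives this number by fitting a logistic curve that maps (log) task length to the model's success probability and reading off where the curve crosses the target rate. A toy sketch of that inversion, with coefficients invented for illustration rather than METR's fitted values:

```python
import numpy as np

# Toy version of the time-horizon idea: model success probability as a
# logistic function of log task length, then invert it at a target rate.
beta_0, beta_1 = 2.0, -0.35  # made-up intercept and slope on log2(task minutes)

def p_success(minutes):
    return 1 / (1 + np.exp(-(beta_0 + beta_1 * np.log2(minutes))))

def time_horizon(target_rate):
    # Solve p_success(t) = target_rate for t.
    z = np.log(target_rate / (1 - target_rate))
    return 2 ** ((z - beta_0) / beta_1)

print(f"50% horizon: {time_horizon(0.5):.0f} min")   # longer tasks
print(f"80% horizon: {time_horizon(0.8):.0f} min")   # much shorter
```

Because the fitted curve is shallow, the 80 percent horizon lands far below the 50 percent one, which is exactly the pattern in METR's numbers below.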


The gap between success thresholds is large. At an 80 percent success rate, the time horizon drops to just 27 minutes, about the same as older models, so Opus 4.5 mainly shines on longer tasks. The theoretical upper limit of over 20 hours is likely noise from limited test data, METR says.

Like any benchmark, the METR test has its limits; most notably, it covers only 14 samples. Shashwat Goel offers a detailed breakdown of the weaknesses here.

Source: METR

Some notes on what's new

Hey guys, you’ve probably noticed we’ve changed a thing or two about this website. If you don’t like it yet, give it a chance; it might grow on you.

Two things are important. First, a stronger focus on letting you simply scroll through the main feed and still grasp the most relevant information. Second, with that comes a return to more blog-style publishing overall. We’ve also added a system we call “Context on Demand”.