Aviation startup Boom pivots to gas turbines to feed AI’s power hunger

US aviation startup Boom Supersonic, best known for developing a supersonic passenger jet, is making a surprise entry into the energy business to capitalize on the AI boom.

Founder Blake Scholl unveiled "Superpower," a 42-megawatt gas turbine designed specifically to handle the massive energy loads of AI data centers. With the US power grid struggling to meet demand, companies are increasingly turning to independent power sources like these turbines to keep their facilities running.

The system uses the core of the "Symphony" engine, which was originally built for the company's planned Overture jet. Scholl notes that unlike older models, the turbine can maintain full power in high heat without requiring water cooling.

Crusoe has signed on as the launch customer, and Boom has secured $300 million to begin production. The company plans to use the revenue from the turbine business to help fund the development of its aircraft.

Transformer co-creator Vaswani unveils high-performance Rnj-1 coding model

Essential AI's new open-source model, Rnj-1, outperforms significantly larger competitors on the "SWE-bench Verified" test. This benchmark is considered particularly challenging because it evaluates an AI's ability to independently solve real-world programming problems. Despite being a compact model with just eight billion parameters, Rnj-1 scores 20.8 points.
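
For readers who want to inspect the benchmark itself, the Verified split is published as a dataset on Hugging Face. A minimal sketch of loading it is below; the dataset id refers to the Princeton research release, not to anything from Essential AI, and the field names shown are the standard SWE-bench ones:

```python
from datasets import load_dataset

# SWE-bench Verified: a human-validated subset of SWE-bench where each
# instance pairs a real GitHub issue with the repository state to patch.
ds = load_dataset("princeton-nlp/SWE-bench_Verified", split="test")

task = ds[0]
print(task["instance_id"])              # identifier like "owner__repo-1234"
print(task["problem_statement"][:300])  # the issue text the model must resolve
```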

By comparison, similarly sized models like Qwen 3 (8B, without reasoning) only reach 4.5 points in Essential AI's testing. The system was introduced by Ashish Vaswani, co-founder of Essential AI and co-author of the famous "Attention Is All You Need" paper that launched the Transformer architecture.

Rnj-1 is itself Transformer-based, building on the Gemma 3 architecture. According to the company, development focused primarily on better pre-training rather than post-training methods like reinforcement learning, and the use of the Muon optimizer also kept pre-training computational costs down.
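
Muon's core idea is to take the usual momentum-averaged gradient of each 2D weight matrix and orthogonalize it (via a cheap Newton-Schulz iteration) before applying it as the update. The sketch below follows the optimizer's public reference implementation, not anything Essential AI has released; the hyperparameters are illustrative only.

```python
import torch

def orthogonalize(G: torch.Tensor, steps: int = 5) -> torch.Tensor:
    # Quintic Newton-Schulz iteration that pushes G toward the nearest
    # semi-orthogonal matrix (roughly U @ V^T from its SVD).
    a, b, c = 3.4445, -4.7750, 2.0315  # coefficients from the public Muon code
    X = G / (G.norm() + 1e-7)          # normalize so the iteration converges
    if G.size(0) > G.size(1):
        X = X.T
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * A @ A) @ X
    return X.T if G.size(0) > G.size(1) else X

@torch.no_grad()
def muon_step(weight, grad, buf, lr=0.02, momentum=0.95):
    # One Muon update for a single 2D weight matrix: accumulate momentum,
    # orthogonalize it, rescale by matrix shape, and step in that direction.
    buf.mul_(momentum).add_(grad)
    update = orthogonalize(buf)
    update *= max(1.0, weight.size(0) / weight.size(1)) ** 0.5
    weight.add_(update, alpha=-lr)
```

Because the orthogonalization equalizes the scale of the update's directions, Muon tends to make faster per-step progress than AdamW on matrix parameters, which is where the reported pre-training compute savings come from.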