Ad
Skip to content
Read full article about: Aviation startup Boom pivots to gas turbines to feed AI’s power hunger

US aviation startup Boom Supersonic, typically focused on developing a supersonic passenger jet, is making a surprise entry into the energy business to capitalize on the AI boom.

Founder Blake Scholl unveiled "Superpower," a 42-megawatt gas turbine designed specifically to handle the massive energy loads of AI data centers. With the US power grid struggling to meet demand, companies are increasingly turning to independent power sources like these turbines to keep their facilities running.

The system uses the core of the "Symphony" engine, which was originally built for the company's planned Overture jet. Scholl notes that unlike older models, the turbine can maintain full power in high heat without requiring water cooling.

Crusoe has signed on as the launch customer, and Boom has secured $300 million to begin production. The company plans to use the revenue from the turbine business to help fund the development of its aircraft.

Read full article about: AI agents in GitHub and GitLab workflows create new enterprise security risks

Aikido Security warns that plugging AI agents into GitHub and GitLab workflows opens up a serious vulnerability in enterprise environments. The issue hits widely used tools like Gemini CLI, Claude Code, OpenAI Codex, and GitHub AI Inference.

According to the security firm, attackers can slip hidden instructions into issues, pull requests, or commits. That text then flows straight into model prompts, where the AI interprets it as a command instead of harmless content. Because these agents often have permission to run shell commands or modify repos, a single prompt injection can leak secrets or alter workflows. Aikido says tests showed this risk affected at least five Fortune 500 companies.

Aikido

Google patched the issue in its Gemini CLI repo within four days, according to the report. To help organizations secure their pipelines, Aikido published open search rules and recommends limiting the tools available to AI agents, validating all inputs, and avoiding the direct execution of AI outputs.