Ad
Skip to content
Read full article about: AI offensive cyber capabilities are doubling every six months, safety researchers find

AI safety research firm Lyptus Research has published a new study on the offensive cybersecurity capabilities of AI models. The study is based on the METR time-horizon method and involved testing with ten professional security experts.

According to the findings, AI's offensive cyber capability has been doubling every 9.8 months since 2019, and since 2024, that pace has accelerated to every 5.7 months. Opus 4.6 and GPT-5.3 Codex can now solve tasks at a 50 percent success rate with a two-million-token budget that would take human experts roughly three hours to complete.

Chart showing the rise in offensive cyber capability of AI models between 2019 and 2026, measured by time horizon in human-equivalent task time. Two trend lines illustrate the doubling times of 9.8 and 5.7 months.
Offensive cyber capability of AI models over time: From GPT-2 (2019) to Opus 4.6 and GPT-5.3 Codex (2026), the time horizon grew from 30 seconds to roughly three hours. The doubling time accelerated from 9.8 months (since 2019) to 5.7 months (since 2024). | Image: Lyptus Research

Performance jumps significantly with higher token budgets: GPT-5.3 Codex goes from a 3.1-hour to a 10.5-hour time horizon when given ten million tokens instead of two million. The researchers say this suggests they're still underestimating the actual rate of progress. Open-source models trail their closed-source counterparts by about 5.7 months.

The study drew on 291 tasks in total. All data is available on GitHub and Hugging Face, with the full report available here.

Alibaba's Qwen team makes AI models think deeper with new algorithm

Reinforcement learning hits a wall with reasoning models because every token gets the same reward. A new algorithm from Alibaba’s Qwen team fixes this by weighting each step based on how much it shapes what comes next, doubling the length of thought processes in the process.

Google Deepmind study exposes six "traps" that can easily hijack autonomous AI agents in the wild

AI agents are expected to browse the web on their own, handle emails, and carry out transactions. But the very environment they operate in can be weaponized against them. Researchers at Google Deepmind have put together the first systematic catalog of how websites, documents, and APIs can be used to manipulate, deceive, and hijack autonomous agents, and they’ve identified six main categories of attack.