OpenAI is piloting Aardvark, a security tool built on GPT-5 that scans software code for vulnerabilities. The system is designed to work like a security analyst: it reviews code repositories, flags potential risks, tests whether vulnerabilities can be exploited in a sandbox, and suggests fixes.


In internal tests, OpenAI says Aardvark found 92 percent of known and intentionally added vulnerabilities. The tool has also been used on open source projects, where it identified several issues that later received CVE (Common Vulnerabilities and Exposures) numbers.

Aardvark's workflow: GPT-5 scans code, tests for vulnerabilities, and suggests fixes. | Image: OpenAI

Aardvark is already in use on some of OpenAI's internal systems and with selected partners. For now, it's available only in a closed beta, and developers can apply for access. Anthropic offers a similar open source tool for its Claude model.

Matthias is the co-founder and publisher of THE DECODER, exploring how AI is fundamentally changing the relationship between humans and computers.