OpenAI is piloting Aardvark, a security tool built on GPT-5 that scans software code for vulnerabilities. The system is designed to work like a security analyst: it reviews code repositories, flags potential risks, tests whether vulnerabilities can be exploited in a sandbox, and suggests fixes.

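OpenAI hasn't published technical details of how Aardvark is built, but as a rough illustration, here is a minimal Python sketch of the scan/validate/patch loop described above. Everything in it is hypothetical: `ask_model` stands in for a GPT-5 API call, and the "sandbox" is reduced to running a model-supplied proof-of-concept in a subprocess with a timeout.

```python
"""Hypothetical sketch of an Aardvark-style scan/validate/patch loop.

This is not OpenAI's implementation; all names and structure are
assumptions for illustration only.
"""
import json
import pathlib
import subprocess
import tempfile


def ask_model(prompt: str) -> str:
    """Placeholder for an LLM call (e.g., a GPT-5 API request).

    Expected to return JSON text; wire up a real model client here.
    """
    raise NotImplementedError


def scan_repository(repo: pathlib.Path) -> list[dict]:
    """Step 1: review code in the repository and flag potential risks."""
    findings = []
    for path in repo.rglob("*.py"):
        answer = ask_model(
            "List possible vulnerabilities in this file as a JSON array "
            "of objects with 'issue' and 'poc' fields:\n" + path.read_text()
        )
        for finding in json.loads(answer):
            finding["file"] = str(path)
            findings.append(finding)
    return findings


def validate_in_sandbox(finding: dict, timeout: int = 10) -> bool:
    """Step 2: test whether a flagged issue is actually exploitable by
    running the model's proof-of-concept in an isolated subprocess."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as poc:
        poc.write(finding["poc"])
    result = subprocess.run(
        ["python", poc.name], capture_output=True, timeout=timeout
    )
    # In this toy setup, a non-zero exit code counts as a confirmed issue.
    return result.returncode != 0


def suggest_fix(finding: dict) -> str:
    """Step 3: ask the model to propose a patch for a confirmed finding."""
    return ask_model("Propose a patch for: " + json.dumps(finding))
```

A real system would need a proper isolated sandbox rather than a bare subprocess, but the three-stage shape (flag, confirm by exploitation, then patch) matches the workflow OpenAI describes.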
In internal tests, OpenAI says Aardvark found 92 percent of known and intentionally added vulnerabilities. The tool has also been used on open source projects, where it identified several issues that later received CVE (Common Vulnerabilities and Exposures) numbers.

Aardvark's workflow: GPT-5 scans code, tests for vulnerabilities, and suggests fixes. | Image: OpenAI

Aardvark is already in use on some internal systems and with selected partners. For now, it's available only in a closed beta that developers can apply to join. Anthropic offers a similar open source tool for its Claude models.

Microsoft and OpenAI have decided they'll define for themselves what counts as artificial general intelligence (AGI) and when they've supposedly achieved it. The two companies say they'll appoint a panel of experts to make that call, but they haven't said who will be on it, what criteria they'll use, or even what AGI actually means. In a joint podcast, Sam Altman and Satya Nadella made it clear there's no shared definition or timeline, not even between them.

AGI was once considered a scientific milestone: an AI system that can think, learn, and solve problems like a human. Now, it's become a bargaining chip in a contract between two tech giants. That shift turns AGI into a label that can be applied or withdrawn whenever it's convenient, stripping it of any real, objective meaning. Maybe that was the goal all along—hype up the AGI label, then let it fade away when it no longer serves their interests.
