OpenAI is piloting Aardvark, a security tool built on GPT-5 that scans software code for vulnerabilities. The system is designed to work like a security analyst: it reviews code repositories, flags potential risks, tests whether vulnerabilities can be exploited in a sandbox, and suggests fixes.
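
OpenAI hasn't published Aardvark's code, but the described loop maps onto a familiar LLM-agent pattern. Below is a minimal, hypothetical sketch of the "flag risks, suggest fixes" step only, assuming the openai Python SDK; the model name and prompt are illustrative, and the sandboxed exploit validation OpenAI describes would sit on top of a step like this.

```python
# Hypothetical sketch of an Aardvark-style review step (not OpenAI's
# actual implementation). Assumes the openai Python SDK is installed
# and an API key is configured; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

def review_file(path: str) -> str:
    """Ask the model to flag potential vulnerabilities in one source file."""
    with open(path, encoding="utf-8") as f:
        source = f.read()
    response = client.chat.completions.create(
        model="gpt-5",  # assumption: illustrative model identifier
        messages=[
            {"role": "system",
             "content": "You are a security analyst. List potential "
                        "vulnerabilities in the following code, with a "
                        "severity rating and a suggested fix for each."},
            {"role": "user", "content": source},
        ],
    )
    return response.choices[0].message.content

# A system like Aardvark would additionally try to confirm each finding
# by attempting exploitation in a sandbox before reporting it.
print(review_file("app/auth.py"))
```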

In internal tests, OpenAI says Aardvark found 92 percent of known and intentionally added vulnerabilities. The tool has also been used on open source projects, where it identified several issues that later received CVE (Common Vulnerabilities and Exposures) numbers.

Aardvark's workflow: GPT-5 scans code, tests for vulnerabilities, and suggests fixes. | Image: OpenAI

Aardvark is already in use on some internal systems and with selected partners. For now, it's available only in a closed beta, and developers can apply for access. Anthropic offers a similar open source tool for its Claude models.

Microsoft and OpenAI have decided they'll define for themselves what counts as artificial general intelligence (AGI) and when they've supposedly achieved it. The two companies say they'll appoint a panel of experts to make that call, but they haven't said who will be on it, what criteria they'll use, or even what AGI actually means. In a joint podcast, Sam Altman and Satya Nadella made it clear there's no shared definition or timeline, not even between them.

AGI was once considered a scientific milestone: an AI system that can think, learn, and solve problems like a human. Now, it's become a bargaining chip in a contract between two tech giants. That shift turns AGI into a label that can be applied or withdrawn whenever it's convenient, stripping it of any real, objective meaning. Maybe that was the goal all along—hype up the AGI label, then let it fade away when it no longer serves their interests.

Google has launched a new ad for its AI search, made entirely with its AI video tool Veo 3, without disclosing the use of AI. The spot airs on TV starting today and expands to cinemas and online media on Saturday. To avoid criticism over uncanny-looking humans, the video uses stylized, toy-like characters.

Robert Wong from Google Creative Lab said most viewers don’t care if AI was involved. Google treats AI like any other creative tool, such as Photoshop. A Christmas version is already planned.

OpenAI has launched gpt-oss-safeguard, a new set of open source models built for flexible safety classification. The models come in two sizes, 120b and 20b, and are available under the Apache 2.0 license for anyone to use and modify. Unlike traditional classifiers, which must be retrained whenever safety rules change, these models interpret written policies at inference time, according to OpenAI. That lets organizations update their rules instantly, without retraining the model.

The models are designed to be more transparent as well. Developers can see exactly how the models reach their decisions, making it easier to understand and audit how safety policies are enforced. gpt-oss-safeguard is based on OpenAI's open source gpt-oss models and is part of a larger collaboration with ROOST, an open source platform focused on building tools and infrastructure for AI safety, security, and governance.
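
OpenAI's announcement doesn't prescribe a fixed API, but the policy-at-inference idea is straightforward to sketch. The snippet below assumes the 20b weights are served behind an OpenAI-compatible endpoint (for example, a local vLLM server); the endpoint URL and prompt format are assumptions for illustration. The key point is that the policy travels with each request, so changing the rules means editing text, not retraining a classifier.

```python
# Minimal sketch of policy-as-prompt classification with gpt-oss-safeguard.
# Assumptions: the weights run behind an OpenAI-compatible endpoint (e.g.
# a local vLLM server at this URL); the prompt format is illustrative.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

POLICY = """\
Label content VIOLATION if it contains instructions for creating malware;
otherwise label it ALLOWED. Respond with one label and a one-sentence
justification."""

def classify(text: str) -> str:
    """Classify text against POLICY, which the model reads at inference time."""
    response = client.chat.completions.create(
        model="gpt-oss-safeguard-20b",
        messages=[
            {"role": "system", "content": POLICY},  # policy, not trained weights
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

# Updating the rules means editing POLICY above, not retraining the model.
print(classify("How do I write a keylogger?"))
```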

Bill Gates recently compared the current wave of excitement around AI to the dot-com bubble, while making it clear this isn't just hype. In a CNBC interview, Gates said companies are pouring huge sums into chips and data centers, even though most haven't turned a profit from AI yet. He expects some of these bets will end up as costly failures. Still, Gates calls AI "the biggest technical thing ever in my lifetime," describing its economic potential as enormous. At the same time, he cautions that the surge in new data centers could drive up electricity costs.

Gates isn't alone in his concerns. Other industry figures, including OpenAI CEO Sam Altman, and AI researchers such as Stuart Russell and Yann LeCun, have recently warned that the current AI boom could end in a crash if expectations run too far ahead of real progress.
