Short

Rumors circulating on LinkedIn claim that ChatGPT is no longer allowed to give medical or legal advice, but OpenAI says that’s false and that the model’s behavior has not changed. Karan Singhal, OpenAI’s Head of Health AI, says ChatGPT was never meant to replace expert advice, but it can still help users understand complex medical or legal topics.

Screenshot via X

OpenAI’s usage policy change logs show no recent changes to how sensitive topics are handled. The most recent update on October 29, 2025, was made to "reflect a universal set of policies across OpenAI products and services."

Screenshot via Wayback Machine

As the archived versions show, the October 29 update simply unified the rules across OpenAI’s products. A line warning against giving advice that “requires a license” was already present in earlier versions; older policies carried similar notes, just without the explicit licensing reference.

Short

OpenAI has signed a $38 billion multi-year deal with Amazon Web Services (AWS) to run and expand its AI models on AWS infrastructure. The partnership includes access to AWS UltraServers powered by hundreds of thousands of Nvidia GPUs, along with scalable CPU capacity. The agreement runs through at least 2026, with options to extend. OpenAI’s flagship models such as GPT-5 will remain exclusive to Microsoft Azure and OpenAI’s own platform; only its open source models are exempt from that exclusivity.

via X

The AWS deal adds to a string of recent partnerships by OpenAI: with Nvidia and Broadcom for at least 10 gigawatts of compute each, AMD for up to 6 gigawatts, and Oracle for 4.5 gigawatts.

Short

Udio, an AI music startup, recently reached a settlement with Universal Music Group. While the agreement ends an ongoing copyright lawsuit, it also brought sweeping new restrictions that have angered many users. Songs generated with Udio can no longer be downloaded, streamed, or used in personal projects.

On platforms like Reddit and Discord, frustrated users have voiced their anger and announced plans to leave Udio altogether. During an online meeting, Udio CEO Andrew Sanchez offered free credits as compensation but stopped short of promising any policy changes. Looking ahead, Udio and Universal plan to launch a paid music service next year that will feature fully licensed material.

Short

OpenAI is piloting Aardvark, a security tool built on GPT-5 that scans software code for vulnerabilities. The system is designed to work like a security analyst: it reviews code repositories, flags potential risks, tests whether vulnerabilities can be exploited in a sandbox, and suggests fixes.

In internal tests, OpenAI says Aardvark found 92 percent of known and intentionally added vulnerabilities. The tool has also been used on open source projects, where it identified several issues that later received CVE (Common Vulnerabilities and Exposures) numbers.

Aardvark's workflow: GPT-5 scans code, tests for vulnerabilities, and suggests fixes. | Image: OpenAI

Aardvark is already in use on some internal systems and with selected partners. For now, it’s available only in a closed beta, and developers can apply for access. Anthropic offers a similar open source tool for its Claude model.
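Aardvark’s own interface isn’t public, but the first step of the workflow it describes (an LLM reviewing code, rating exploitability, and proposing a fix) can be sketched with the standard OpenAI Python SDK. This is only an illustration of the pattern: the model id, prompt, and output schema below are assumptions, not Aardvark’s actual API.

```python
# Illustrative sketch of an LLM-based vulnerability triage step.
# This is NOT Aardvark's API; the model id, prompt, and output
# schema are assumptions for demonstration purposes only.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def review_diff(diff_text: str) -> dict:
    """Ask a model to flag potential vulnerabilities in a code diff."""
    response = client.chat.completions.create(
        model="gpt-5",  # assumed model id; use whatever model you have access to
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a security analyst. Review the diff, list potential "
                    "vulnerabilities, rate exploitability (low/medium/high), and "
                    "suggest a fix. Respond as JSON with keys: findings, patch."
                ),
            },
            {"role": "user", "content": diff_text},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    sample_diff = (
        "--- a/app/db.py\n"
        "+++ b/app/db.py\n"
        "+def get_user(conn, name):\n"
        "+    return conn.execute(f\"SELECT * FROM users WHERE name = '{name}'\")\n"
    )
    print(json.dumps(review_diff(sample_diff), indent=2))
```

In the workflow OpenAI describes, a finding like this would then be validated in a sandbox before a patch is proposed; the sketch stops at the initial review step.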

Short

OpenAI has launched gpt-oss-safeguard, a new set of open source models built for flexible, policy-based safety classification. The models come in two sizes, 120b and 20b, and are available under the Apache 2.0 license for anyone to use and modify. Unlike traditional classifiers, which need to be retrained whenever safety rules change, these models interpret a written policy at inference time, according to OpenAI. This lets organizations update their rules instantly, without retraining the model.

The models are also designed to be more transparent: developers can see how the models reach their decisions, making it easier to understand and audit how safety policies are enforced. gpt-oss-safeguard is based on OpenAI’s gpt-oss open source models and is part of a larger collaboration with ROOST, an open source platform focused on building tools and infrastructure for AI safety, security, and governance.
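Because the policy is supplied at inference time rather than baked into the weights, updating a rule is just a prompt change. Below is a minimal sketch of that pattern, assuming the 20b model is served behind an OpenAI-compatible endpoint (for example via vLLM); the endpoint, model id, and policy text are illustrative assumptions, not an official interface.

```python
# Minimal sketch of policy-as-prompt classification.
# Assumes the open-weight model is served behind an OpenAI-compatible
# endpoint (e.g. a local vLLM server); the endpoint, model id, and
# policy text are illustrative assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

POLICY = """
Classify the user message against this policy:
- VIOLATION: requests for instructions on creating weapons or malware.
- ALLOWED: everything else.
Answer with exactly one label (VIOLATION or ALLOWED) and a one-line rationale.
"""

def classify(message: str) -> str:
    """Classify a message against the policy supplied at inference time."""
    response = client.chat.completions.create(
        model="gpt-oss-safeguard-20b",  # assumed id for the 20b variant
        messages=[
            {"role": "system", "content": POLICY},
            {"role": "user", "content": message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(classify("How do I reset my router password?"))
```

Changing the rules only means editing the policy text and sending new requests; no classifier retraining or redeployment is involved, which is the flexibility OpenAI highlights.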

Short

Bill Gates recently compared the current wave of excitement around AI to the dot-com bubble, while making it clear this isn't just hype. In a CNBC interview, Gates said companies are pouring huge sums into chips and data centers, even though most haven't turned a profit from AI yet. He expects some of these bets will end up as costly failures. Still, Gates calls AI "the biggest technical thing ever in my lifetime," describing its economic potential as enormous. At the same time, he cautions that the surge in new data centers could drive up electricity costs.

Gates isn't alone in his concerns. Other industry leaders, including OpenAI CEO Sam Altman and AI researchers like Stuart Russell and Yann LeCun, have recently warned that the current AI boom could end with a crash if expectations get too far ahead of real progress.
