Meta has released DINOv3, a new AI model for universal image processing that doesn't require labeled data. Trained with self-supervised learning on 1.7 billion images and built with 7 billion parameters, DINOv3 handles a wide range of image tasks and domains with little or no adaptation. This makes it especially useful in fields with limited annotated data, such as satellite imagery. Meta says the model performs well on challenging benchmarks that previously needed specialized systems.

Video: Meta

According to Meta's benchmarks, DINOv3 outperforms its predecessor DINOv2, though the improvement is less pronounced than the jump from the first DINO model to DINOv2. Meta has released the pre-trained models in several variants, along with adapters and the training and evaluation code, under the DINOv3 license, which allows commercial use; everything is available on GitHub.
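For orientation, here is a minimal sketch of how one might extract frozen image features with one of the released backbones, assuming the GitHub repository exposes its checkpoints through PyTorch Hub the way DINOv2 does; the repository path and entry-point name below are hypothetical placeholders, not confirmed identifiers.

```python
# Minimal sketch: feature extraction with a released DINOv3 backbone.
# Assumes the repo exposes models via PyTorch Hub as DINOv2 does; the repo
# path and entry-point name are hypothetical placeholders.
import torch
from PIL import Image
from torchvision import transforms

# Load a pre-trained backbone (check the repository for the actual names)
model = torch.hub.load("facebookresearch/dinov3", "dinov3_vitl16")
model.eval()

# Standard ImageNet-style preprocessing
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)

# Frozen features, usable for downstream tasks with little or no adaptation
with torch.no_grad():
    features = model(image)

print(features.shape)
```

Because the backbone is trained without labels, such frozen features can be paired with lightweight task-specific heads, which is broadly how the released adapters are meant to be used.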

Anthropic's Claude Opus 4 and 4.1 models can now end conversations when users repeatedly try to get them to generate harmful or abusive content. The feature kicks in after several refusals and builds on Anthropic's research into the potential distress AI models may experience when confronted with harmful or abusive prompts. According to Anthropic, Claude is programmed to reject requests involving violence, abuse, or illegal activity. I gave it a shot, but the model just kept chatting and refused to hang up.

Image: Screenshot THE DECODER

Anthropic says this "hang up" function is an "ongoing experiment" and is only used as a last resort or when users explicitly ask for it. Once a conversation has been ended, it can't be resumed, but users can start a new one or edit their previous prompts.

OpenAI has updated GPT-5 to sound less formal and more personal after users said the model felt too cold. The model will now use phrases like "good question" or "great start" more often, OpenAI said. Internal tests show no increase in flattery, which had been a problem with GPT-4o. The new tone is rolling out globally within a day.

CEO Sam Altman also said on X that ChatGPT users will soon be able to adjust the AI's style to suit their preferences. More updates are planned.

OpenAI is working on AI systems that can tackle problems for hours or even days at a time. On the company's official podcast, Chief Scientist Jakub Pachocki and researcher Szymon Sidor share inside stories about building these long-term thinking models, which are designed to plan, reason, and experiment over extended periods. OpenAI's math and coding models, which recently achieved gold-medal-level results at international olympiads in their fields, offer an early glimpse of this approach.

The goal is to automate parts of the research process, such as AI-driven discovery of new ideas in medicine or AI safety. According to the researchers, making this possible will require far more computing power than most users have access to today, which helps explain Sam Altman's willingness to invest "trillions of dollars" in data centers over the coming years.
