
According to Character.ai CEO Karandeep Anand, users spend an average of 80 minutes a day chatting with AI-generated fictional characters. That puts Character.ai nearly on par with apps like TikTok (95 minutes) and YouTube (84 minutes), and ahead of Instagram (70 minutes). The numbers help explain why Meta CEO Mark Zuckerberg is now putting a bigger emphasis on personalized chatbots across his own platforms.

Character.ai currently has 20 million monthly active users. Half are women, and most are Gen Z or even younger. Critics warn that these kinds of apps can create emotional dependencies among young people and have called for them to be banned for minors. In the US, several lawsuits are underway over alleged harm to children, including one involving a suicide. Character.ai has responded by offering a separate model for users under 18 and now warns against excessive use.


Meta has released DINOv3, a new AI model for universal image processing that doesn't require labeled data. Trained with self-supervised learning on 1.7 billion images and built with 7 billion parameters, DINOv3 handles a wide range of image tasks and domains with little or no adaptation. This makes it especially useful in fields with limited annotated data, such as satellite imagery. Meta says the model performs well on challenging benchmarks that previously needed specialized systems.
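To illustrate how a frozen self-supervised backbone like this is typically used, here is a minimal PyTorch feature-extraction sketch. The hub repo and entrypoint names (`facebookresearch/dinov3`, `dinov3_vitb16`), the preprocessing values, and the input file are assumptions modeled on the DINOv2 release pattern; check Meta's GitHub repository for the exact identifiers and any access requirements.

```python
import torch
from torchvision import transforms
from PIL import Image

# Load a DINOv3 backbone via torch.hub. Repo and entrypoint names are
# assumptions based on the DINOv2 naming convention; the official
# release may use different identifiers or require downloaded weights.
model = torch.hub.load("facebookresearch/dinov3", "dinov3_vitb16")
model.eval()

# Standard ImageNet-style preprocessing; the official release may
# specify different resize and normalization values.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# "satellite_tile.jpg" is a placeholder input, echoing the satellite
# imagery use case mentioned above.
img = preprocess(Image.open("satellite_tile.jpg")).unsqueeze(0)

# Frozen-backbone inference: no labels or fine-tuning needed. The
# resulting embedding can feed a lightweight task head such as a
# linear classifier or segmentation decoder.
with torch.no_grad():
    features = model(img)  # shape: (1, embed_dim)
print(features.shape)
```

The point of this pattern is that the backbone stays fixed: only a small task-specific head is trained on top of the extracted features, which is exactly what makes such a model attractive in domains with little annotated data.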

Video: Meta

According to Meta's benchmarks, DINOv3 outperforms DINOv2, though the improvement is less pronounced than the jump from the first to the second version. Meta has released the pre-trained models in several variants, along with adapters and the training and evaluation code, on GitHub under the DINOv3 license, which permits commercial use.


Anthropic's Claude Opus 4 and 4.1 models can now end conversations if users repeatedly try to get them to generate harmful or abusive content. The feature kicks in after several refusals and is based on Anthropic's research into the potential distress AI models may experience when exposed to abusive prompts. According to Anthropic, Claude is trained to refuse requests involving violence, abuse, or illegal activity. I gave it a shot, but the model just kept chatting and never hung up.

Image: Screenshot THE DECODER

Anthropic says this "hang up" function is an "ongoing experiment" and is used only as a last resort or when users specifically ask for it. Once a conversation is terminated, it can't be resumed, but users can start a new chat or edit and resubmit their previous prompts.
