Meta has released DINOv3, a new AI model for general-purpose image processing that is trained without labeled data. Built with 7 billion parameters and trained via self-supervised learning on 1.7 billion images, DINOv3 handles a wide range of image tasks and domains with little or no adaptation. That makes it especially useful in fields where annotated data is scarce, such as satellite imagery. Meta says the model performs well on challenging benchmarks that previously required specialized systems.


Video: Meta

According to Meta's benchmarks, DINOv3 outperforms DINOv2, though the improvement is less pronounced than the jump from the first to the second version. Meta has released the pre-trained models in several variants, along with adapters and the training and evaluation code, under the DINOv3 license, which permits commercial use. Everything is available on GitHub.
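In practice, models of this kind are typically used as frozen backbones for feature extraction. The sketch below illustrates what that could look like, assuming a torch.hub entry point modeled on DINOv2's interface; the repository path "facebookresearch/dinov3" and the model name "dinov3_vitb16" are assumptions here, so check Meta's GitHub release for the actual identifiers.

```python
import torch
from PIL import Image
from torchvision import transforms

# Assumed hub identifiers, modeled on DINOv2's interface; the actual
# repo path and model names are defined in Meta's DINOv3 release.
model = torch.hub.load("facebookresearch/dinov3", "dinov3_vitb16")
model.eval()

# Standard ImageNet-style preprocessing for vision transformer backbones
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("example.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)  # shape: (1, 3, 224, 224)

# Extract a global image embedding from the frozen backbone;
# a lightweight task head or adapter would be trained on top of this.
with torch.no_grad():
    features = model(batch)

print(features.shape)
```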

Matthias is the co-founder and publisher of THE DECODER, exploring how AI is fundamentally changing the relationship between humans and computers.