As for doing it in general, it's a fairly standard vision transformer so anything built on DINOv2 (or any other ViT) should be easy to adapt to v3.
fnands 5 hours ago [-]
As someone who works on satellite imagery, this part is incredibly exciting:
> ViT models pretrained on satellite dataset (SAT-493M)
DINOv2 had pretty poor out-of-the-box performance on satellite/aerial imagery, so it's super exciting that they released a version of it specifically for this use case.
Imnimo 12 hours ago [-]
I think SAM and DINO are the two off-the-shelf image models I've gotten the most mileage out of.
llm_nerd 13 hours ago [-]
You have to share your contact information, including your DoB, and then be approved for access before you can obtain the models. Given that it's Meta, I assume they're actually validating it against their All Humans database.
They made their own DINOv3 license for this release (whereas DINOv2 used the Apache 2.0 license).
Neat though. Will still check it out.
As a first comment, I had to install the latest transformers (4.56.0.dev, e.g. pip install git+https://github.com/huggingface/transformers) for it to work properly. 4.55.2 and earlier were failing with a missing image type in the config.
Qwuke 13 hours ago [-]
Yes, it's pretty disappointing for a seemingly big improvement over SOTA to be commercially licensed compared to the previous version. At least in the press release they're not portraying it as open source just because it's on GitHub/HuggingFace.
tough 13 hours ago [-]
The new Meta AI czar, Wang, hinted in previous interviews that Meta might change their stance on licensing/open source.
Seems like the tides are shifting at Meta.
rajman187 12 hours ago [-]
This has nothing to do with the newly appointed fellow nor Meta Superintelligence Labs, but rather work from FAIR that would have gone through a lengthy review process before seeing the light of day. Not fun to see the license change in any case
I remember DINOv2 originally had a non-commercial licence. I (along with others) just asked on a GitHub issue if they could change it, and after some time, they did. Might be worth asking.
barbolo 14 hours ago [-]
That's awesome. DINOv2 was the best image embedder until now.
ranger_danger 15 hours ago [-]
I have no idea what this even is.
ethan_smith 6 hours ago [-]
DINO (self-DIstillation with NO labels) is a self-supervised computer vision framework that learns powerful image representations without requiring labeled data. It's particularly valuable for downstream tasks like object detection and segmentation, with DINOv3 now scaling to over 1B parameters and trained on over a billion images.
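A common downstream recipe with these models is to keep the backbone frozen and classify with k-NN over its embeddings. Here's a toy sketch of that step, with random vectors standing in for real DINO features (all names, dimensions, and class counts below are illustrative, not from the release):

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 16  # stand-in for the backbone's real feature dimension

# Pretend these are frozen DINO embeddings of a small labelled gallery:
# 3 classes, 20 examples each, clustered around per-class centres.
centers = rng.normal(scale=3.0, size=(3, DIM))
gallery = np.repeat(centers, 20, axis=0) + rng.normal(size=(60, DIM))
labels = np.repeat(np.arange(3), 20)

def knn_predict(query, k=5):
    """Classify a query embedding by majority vote of its k nearest neighbours."""
    dists = np.linalg.norm(gallery - query, axis=1)
    nearest = labels[np.argsort(dists)[:k]]
    return np.bincount(nearest).argmax()

# A query drawn near class 1's centre should come back labelled 1.
query = centers[1] + rng.normal(size=DIM)
print(knn_predict(query))
```

The point of the frozen-backbone setup is that this k-NN (or a linear probe) is all the "training" you need per task; the expensive part was done once, without labels.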
n3storm 15 hours ago [-]
D3NO?
kaoD 15 hours ago [-]
> An extended family of versatile vision foundation models producing high-quality dense features and achieving outstanding performance on various vision tasks including outperforming the specialized state of the art across a broad range of settings, without fine-tuning
kevinventullo 14 hours ago [-]
To elaborate, this is a foundation model. This basically means it can take an arbitrary image and map it to a high dimensional space H in which ~arbitrary characteristics become much easier to solve for.
For example (and this might be oversimplifying a bit, computer vision people please correct me if I’m wrong) if you’re interested in knowing whether or not the image contains a cat, then maybe there is some hyperplane P in H for which images on one side of P do not contain a cat, and images on the other side do contain a cat. And so solving for “Does this image contain a cat?” becomes a much easier problem: all you have to do is figure out what P is. Once you do that, you can pass your image into DINO, dot product with the equation for P, and check whether the answer is negative or positive. The point is that finding P is much easier than training your own computer vision model from scratch.
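A minimal sketch of that hyperplane idea, with synthetic vectors standing in for DINO embeddings (the real pipeline would first run images through the model; everything here, including the dimension, is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # stand-in for the embedding dimension of H

# Synthetic "embeddings": cat images cluster around +mu, non-cat around -mu.
mu = rng.normal(size=DIM)
cats = rng.normal(size=(200, DIM)) + mu
not_cats = rng.normal(size=(200, DIM)) - mu
X = np.vstack([cats, not_cats])
y = np.array([1] * 200 + [-1] * 200)

# Find the hyperplane P with a perceptron-style update; w is P's normal vector.
w = np.zeros(DIM)
b = 0.0
for _ in range(50):
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0:  # misclassified: nudge P toward this point
            w += yi * xi
            b += yi

def contains_cat(embedding):
    """'Does this image contain a cat?' = which side of P its embedding falls on."""
    return embedding @ w + b > 0

acc = np.mean([contains_cat(xi) == (yi > 0) for xi, yi in zip(X, y)])
print(f"training accuracy: {acc:.2f}")
```

In practice people call this a linear probe: the foundation model stays frozen, and only this cheap linear boundary is fit per task.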
hgo 5 hours ago [-]
Thanks, I think I understand roughly.
Could it be used for recognizing people? As in identifying which person is in which image?
reactordev 14 hours ago [-]
If computer vision were semantic search, nailed it. It’s a little more complicated than that but - with this new model, not by much :D
kristopolous 11 hours ago [-]
This is still pretty non-specific, however, luckily, they have a landing page:
I’m fascinated by this, but am admittedly clueless about how to actually go about building any kind of recognizer or other system atop it.
https://ai.meta.com/dinov3/
DINOV3: Self-supervised learning for vision at unprecedented scale | https://news.ycombinator.com/item?id=44904608