Spaces:
Runtime error
Runtime error
File size: 4,472 Bytes
94e735e |
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 |
import streamlit as st
from streamlit_extras.switch_page_button import switch_page
st.title("DINOv2")
st.success("""[Original tweet](https://twitter.com/mervenoyann/status/1743290724672495827) (January 5, 2024)""", icon="ℹ️")
st.markdown(""" """)
st.markdown("""DINOv2 is the king for self-supervised learning in images 🦖🦕
But how does it work? I've tried to explain how it works but let's expand on it 🧶
""")
st.markdown(""" """)
st.image("pages/DINOv2/image_1.jpeg", use_column_width=True)
st.markdown(""" """)
st.markdown("""
DINOv2 is essentially DINO on steroids, so let's talk about DINOv1 first 🦕
It's essentially a pre-training technique to train ViTs with self-supervision, that uses an unusual way of distillation 🧟♂️👨🏻🏫.
Distillation is a technique where there's a large pre-trained model (teacher), and you have a smaller model (student) initialized randomly.
Then during training the student, you take both models'outputs, calculate divergence between them and then update the loss accordingly.
In this case, we have no labels! And the teacher is not pretrained!!!! 🤯
Well, the outputs here are the distributions, and teacher is iteratively updated according to student, which is called exponential moving average.
""")
st.markdown(""" """)
st.image("pages/DINOv2/image_2.jpg", use_column_width=True)
st.markdown(""" """)
st.markdown("""
DINO doesn't use any contrastive loss or clustering but only cross entropy loss (again, what a paper) which leads the model to collapse.
This can be avoided by normalizing the teacher output multiple times, but authors center (to squish logits) and sharpen (through temperature) the teacher outputs.
Finally, local and global crops are given to student and only global crops are given to teacher and this sort of pushes student to identify context from small parts of the image.
""")
st.markdown(""" """)
st.image("pages/DINOv2/image_3.jpeg", use_column_width=True)
st.markdown(""" """)
st.markdown("""How does DINOv2 improve DINO?
⚡️ More efficient thanks to FSDP and Flash Attention
🦖 Has a very efficient data augmentation technique that apparently scales to 100M+ images (put below)
👨🏻🏫 Uses ViT-g instead of training from scratch
""")
st.markdown(""" """)
st.image("pages/DINOv2/image_4.jpeg", use_column_width=True)
st.markdown(""" """)
st.markdown("""
The model is so powerful that you can use DINOv2 even with knn or linear classifiers without need to fine-tuning!
But if you'd like DINOv2 to work even better, [NielsRogge](https://twitter.com/NielsRogge) has built a [notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DINOv2/Fine\_tune\_DINOv2\_for\_image\_classification\_%5Bminimal%5D.ipynb) to fine-tune it using Trainer 📖
He also has a [notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DINOv2/Train\_a\_linear\_classifier\_on\_top\_of\_DINOv2\_for\_semantic\_segmentation.ipynb) if you feel like training a linear classifier only 📔
All the different DINO/v2 model checkpoints are [here](https://huggingface.co/models?search=dinoLastly).
Lastly, special thanks to [ykilcher](https://twitter.com/ykilcher) as I couldn't make sense of certain things in the paper and watched his awesome [tutorial](https://youtube.com/watch?v=h3ij3F) 🤩
""")
st.markdown(""" """)
st.info("""
Ressources:
[DINOv2: Learning Robust Visual Features without Supervision](https://arxiv.org/abs/2304.07193)
by Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, Mahmoud Assran, Nicolas Ballas, Wojciech Galuba, Russell Howes, Po-Yao Huang, Shang-Wen Li, Ishan Misra, Michael Rabbat, Vasu Sharma, Gabriel Synnaeve, Hu Xu, Hervé Jegou, Julien Mairal, Patrick Labatut, Armand Joulin, Piotr Bojanowski (2023)
[GitHub](https://github.com/facebookresearch/dinov2)
[Hugging Face documentation](https://huggingface.co/docs/transformers/model_doc/dinov2)""", icon="📚")
st.markdown(""" """)
st.markdown(""" """)
st.markdown(""" """)
col1, col2, col3 = st.columns(3)
with col1:
if st.button('Previous paper', use_container_width=True):
switch_page("VITMAE")
with col2:
if st.button('Home', use_container_width=True):
switch_page("Home")
with col3:
if st.button('Next paper', use_container_width=True):
switch_page("SigLIP") |