import streamlit as st
from streamlit_extras.switch_page_button import switch_page


translations = {
'en': {'title': 'Grounding DINO',
    'original_tweet': 
       """
       [Original tweet](https://twitter.com/mervenoyann/status/1780558859221733563) (April 17, 2024)
       """,
    'tweet_1':
        """
        We have merged Grounding DINO in 🤗 Transformers 🦖  
        It's an amazing zero-shot object detection model, here's why 🧶 
        """,
    'tweet_2':
        """
        There are two zero-shot object detection models as of now: one is the OWL series by Google Brain and the other is Grounding DINO 🦕  
        Grounding DINO pays immense attention to detail ⬇️  
        Also [try it yourself](https://t.co/UI0CMxphE7).
        """,
    'tweet_3':
        """
        I have also built another [application](https://t.co/4EHpOwEpm0) for GroundingSAM, combining GroundingDINO and Segment Anything by Meta for cutting-edge zero-shot image segmentation.
        """,
    'tweet_4':
        """
        Grounding DINO is essentially a model that connects an image encoder (Swin Transformer) and a text encoder (BERT) with, on top of both, a decoder that outputs bounding boxes 🦖  
        This is quite similar to the <a href='OWLv2' target='_self'>OWL series</a>, which uses a ViT-based detector on top of CLIP. 
        """,
    'tweet_5':
        """
        The authors train Swin-L/T with BERT contrastively (unlike CLIP, where images are matched to texts by similarity): they try to align the region outputs with language phrases at the head outputs 🤩 
        """,
    'tweet_6':
        """
        The authors also form the text features at the sub-sentence level. 
        This means the model extracts certain noun phrases from the training data to remove unwanted influence between words while retaining fine-grained information. 
        """,
    'tweet_7':
        """
        Thanks to all of this, Grounding DINO has great performance on various REC/object detection benchmarks 🏆📈 
        """,
    'tweet_8':
        """
        Thanks to 🤗 Transformers, you can use Grounding DINO very easily!  
        You can also check out [NielsRogge](https://twitter.com/NielsRogge)'s [notebook here](https://t.co/8ADGFdVkta).
        """,
    'ressources':
        """
        Resources:   
        [Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection](https://arxiv.org/abs/2303.05499) 
        by Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Qing Jiang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, Lei Zhang (2023)  
        [GitHub](https://github.com/IDEA-Research/GroundingDINO)  
        [Hugging Face documentation](https://huggingface.co/docs/transformers/model_doc/grounding-dino)
        """
      },
'fr': {
    'title': 'Grounding DINO',
    'original_tweet': 
       """
        [Tweet de base](https://twitter.com/mervenoyann/status/1780558859221733563) (en anglais) (17 avril 2024)
       """,
    'tweet_1':
        """
        Nous avons ajouté Grounding DINO à 🤗 Transformers 🦖  
        C'est un modèle incroyable de détection d'objets en zero-shot, voici pourquoi 🧶 
        """,
    'tweet_2':
        """
        Il existe actuellement deux modèles de détection d'objets en zero-shot, l'un est la série OWL de Google Brain et l'autre est Grounding DINO 🦕.  
        Grounding DINO accorde une grande attention aux détails ⬇️  
        [Essayez-le vous-même](https://t.co/UI0CMxphE7).
        """,
    'tweet_3':
        """
        J'ai également créé une autre [application](https://t.co/4EHpOwEpm0) pour GroundingSAM, combinant GroundingDINO et Segment Anything de Meta pour une segmentation d'images zero-shot de pointe.
        """,
    'tweet_4':
        """ 
        Grounding DINO est essentiellement un modèle avec un encodeur d'image (Swin transformer), un encodeur de texte (BERT) et, au-dessus des deux, un décodeur qui produit des boîtes de délimitation 🦖.  
        Cela ressemble beaucoup à la série <a href='OWLv2' target='_self'>OWL</a>, qui utilise un détecteur basé sur un ViT au-dessus de CLIP. 
        """,
    'tweet_5':
        """
        Les auteurs entraînent Swin-L/T avec BERT de manière contrastive (contrairement à CLIP, où les images sont appariées aux textes au moyen de la similarité) : ils essaient de faire correspondre les régions produites en sortie aux phrases en langage naturel au niveau des têtes de sortie 🤩
        """,
    'tweet_6':
        """
        Les auteurs forment également les caractéristiques textuelles au niveau de la sous-phrase. 
        Cela signifie qu'ils extraient certains groupes nominaux des données d'entraînement afin de supprimer l'influence entre les mots tout en conservant les informations fines. 
        """,
    'tweet_7':
        """
        Grâce à tout cela, Grounding DINO obtient d'excellentes performances sur divers benchmarks de REC et de détection d'objets 🏆📈 
        """,
    'tweet_8':
        """
        Grâce à 🤗 Transformers, vous pouvez utiliser Grounding DINO très facilement !  
        Vous pouvez également consulter le [notebook](https://t.co/8ADGFdVkta) de [NielsRogge](https://twitter.com/NielsRogge).
        """,
    'ressources':
        """
        Ressources :   
        [Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection](https://arxiv.org/abs/2303.05499) 
        de Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Qing Jiang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, Lei Zhang (2023)  
        [GitHub](https://github.com/IDEA-Research/GroundingDINO)  
        [Documentation d'Hugging Face](https://huggingface.co/docs/transformers/model_doc/grounding-dino)
        """
    }
}    


def language_selector():
    languages = {'EN': '🇬🇧', 'FR': '🇫🇷'}
    selected_lang = st.selectbox('', options=list(languages.keys()), format_func=lambda x: languages[x], key='lang_selector')
    return 'en' if selected_lang == 'EN' else 'fr'

left_column, right_column = st.columns([5, 1])

# Add a selector to the right column
with right_column:
    lang = language_selector()

# Add a title to the left column
with left_column:
    st.title(translations[lang]["title"])
    
st.success(translations[lang]["original_tweet"], icon="ℹ️")
st.markdown(""" """)

st.markdown(translations[lang]["tweet_1"], unsafe_allow_html=True)
st.markdown(""" """)

st.image("pages/Grounding_DINO/image_1.jpeg", use_column_width=True)
st.markdown(""" """)

st.markdown(translations[lang]["tweet_2"], unsafe_allow_html=True)
st.markdown(""" """)

st.image("pages/Grounding_DINO/image_2.jpeg", use_column_width=True)
st.image("pages/Grounding_DINO/image_3.jpeg", use_column_width=True)
st.markdown(""" """)

st.markdown(translations[lang]["tweet_3"], unsafe_allow_html=True)
st.markdown(""" """)

st.image("pages/Grounding_DINO/image_4.jpeg", use_column_width=True)
st.markdown(""" """)

st.markdown(translations[lang]["tweet_4"], unsafe_allow_html=True)
st.markdown(""" """)

st.image("pages/Grounding_DINO/image_5.jpeg", use_column_width=True)
st.markdown(""" """)
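
# Hedged illustration, not part of the original thread: the Swin + BERT + decoder
# layout described above can be checked from the default 🤗 Transformers config.
# Assumes GroundingDinoConfig exposes backbone_config/text_config, as other
# DETR-style detection models in the library do.
with st.expander("Architecture sketch"):
    st.code("""
    from transformers import GroundingDinoConfig

    config = GroundingDinoConfig()
    print(config.backbone_config.model_type)  # "swin", the image encoder
    print(config.text_config.model_type)      # "bert", the text encoder
    print(config.decoder_layers)              # layers in the box-predicting decoder
    """)
st.markdown(""" """)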

st.markdown(translations[lang]["tweet_5"], unsafe_allow_html=True)
st.markdown(""" """)

st.image("pages/Grounding_DINO/image_6.jpeg", use_column_width=True)
st.markdown(""" """)

st.markdown(translations[lang]["tweet_6"], unsafe_allow_html=True)
st.markdown(""" """)

st.image("pages/Grounding_DINO/image_7.jpeg", use_column_width=True)
st.markdown(""" """)

st.markdown(translations[lang]["tweet_7"], unsafe_allow_html=True)
st.markdown(""" """)

st.image("pages/Grounding_DINO/image_8.jpeg", use_column_width=True)
st.markdown(""" """)

st.markdown(translations[lang]["tweet_8"], unsafe_allow_html=True)
st.markdown(""" """)

st.image("pages/Grounding_DINO/image_9.jpeg", use_column_width=True)
st.markdown(""" """)

with st.expander("Code"):
    st.code("""
    import requests
    import torch
    from PIL import Image
    from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection

    model_id = "IDEA-Research/grounding-dino-tiny"
    # run on GPU when available
    device = "cuda" if torch.cuda.is_available() else "cpu"

    processor = AutoProcessor.from_pretrained(model_id)
    model = AutoModelForZeroShotObjectDetection.from_pretrained(model_id).to(device)

    # example inputs: text queries should be lowercase and end with a dot
    image_url = "http://images.cocodataset.org/val2017/000000039769.jpg"
    image = Image.open(requests.get(image_url, stream=True).raw)
    text = "a cat. a remote control."

    inputs = processor(images=image, text=text, return_tensors="pt").to(device)
    with torch.no_grad():
        outputs = model(**inputs)

    results = processor.post_process_grounded_object_detection(
        outputs,
        inputs.input_ids,
        box_threshold=0.4,
        text_threshold=0.3,
        target_sizes=[image.size[::-1]])
    """)
st.markdown(""" """)
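# Hedged follow-up, not from the original tweet: one way to read the post-processed
# output. In the transformers release current at the time, `results` is a list of
# per-image dicts with "scores", "labels" (matched phrase strings) and "boxes".
with st.expander("Reading the results (sketch)"):
    st.code("""
    result = results[0]  # one dict per input image
    for score, label, box in zip(result["scores"], result["labels"], result["boxes"]):
        print(f"{label}: {score.item():.2f} at {[round(v, 1) for v in box.tolist()]}")
    """)
st.markdown(""" """)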

st.info(translations[lang]["ressources"], icon="📚")  

st.markdown(""" """)
st.markdown(""" """)
st.markdown(""" """)
col1, col2, col3 = st.columns(3)
with col1:
    if lang == "en":
        if st.button('Previous paper', use_container_width=True):
            switch_page("SegGPT")
    else:
        if st.button('Papier précédent', use_container_width=True):
            switch_page("SegGPT")
with col2:
    if lang == "en":
        if st.button("Home", use_container_width=True):
            switch_page("Home")
    else:
        if st.button("Accueil", use_container_width=True):
            switch_page("Home")
with col3:
    if lang == "en":
        if st.button("Next paper", use_container_width=True):
            switch_page("DocOwl 1.5")
    else:
        if st.button("Papier suivant", use_container_width=True):
            switch_page("DocOwl 1.5")