File size: 2,706 Bytes
94e735e
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
import streamlit as st
from streamlit_extras.switch_page_button import switch_page

st.title("DenseConnector")

st.success("""[Original tweet](https://twitter.com/mervenoyann/status/1796089181988352216) (May 30, 2024)""", icon="ℹ️")
st.markdown(""" """)

st.markdown("""Do we fully leverage image encoders in vision language models? 👀  
A new paper built a dense connector that does it better! Let's dig in 🧶 
""")
st.markdown(""" """)

st.image("pages/DenseConnector/image_1.jpg", use_column_width=True)
st.markdown(""" """)

st.markdown("""
VLMs consist of an image encoder block, a projection layer that projects image embeddings to text embedding space and then a text decoder sequentially connected 📖  
This [paper](https://t.co/DPQzbj0eWm) explores using intermediate states of image encoder and not a single output 🤩 
""")
st.markdown(""" """)

st.image("pages/DenseConnector/image_2.jpg", use_column_width=True)
st.markdown(""" """)

st.markdown("""
The authors explore three different ways of instantiating dense connector: sparse token integration, sparse channel integration and dense channel integration (each of them just take intermediate outputs and put them together in different ways, see below).  
""")
st.markdown(""" """)

st.image("pages/DenseConnector/image_3.jpg", use_column_width=True)
st.markdown(""" """)

st.markdown("""
They explore all three of them integrated to LLaVA 1.5 and found out each of the new models are superior to the original LLaVA 1.5.  
""")
st.markdown(""" """)

st.image("pages/DenseConnector/image_4.jpg", use_column_width=True)
st.markdown(""" """)

st.markdown("""
I tried the [model](https://huggingface.co/spaces/HuanjinYao/DenseConnector-v1.5-8B) and it seems to work very well 🥹  
The authors have released various [checkpoints](https://t.co/iF8zM2qvDa) based on different decoders (Vicuna 7/13B and Llama 3-8B). 
""")
st.markdown(""" """)

st.image("pages/DenseConnector/image_5.jpg", use_column_width=True)
st.markdown(""" """)

st.info("""
Ressources:  
[Dense Connector for MLLMs](https://arxiv.org/abs/2405.13800) 
by Huanjin Yao, Wenhao Wu, Taojiannan Yang, YuXin Song, Mengxi Zhang, Haocheng Feng, Yifan Sun, Zhiheng Li, Wanli Ouyang, Jingdong Wang (2024)  
[GitHub](https://github.com/HJYao00/DenseConnector)""", icon="📚")

st.markdown(""" """)
st.markdown(""" """)
st.markdown(""" """)
col1, col2, col3 = st.columns(3)
with col1:
    if st.button('Previous paper', use_container_width=True):
        switch_page("CuMo")
with col2:
    if st.button('Home', use_container_width=True):
        switch_page("Home")
with col3:
    if st.button('Next paper', use_container_width=True):
        switch_page("Depth Anything v2")