Spaces:
Running
Running
import streamlit as st | |
from streamlit_extras.switch_page_button import switch_page | |
st.title("DenseConnector") | |
st.success("""[Original tweet](https://twitter.com/mervenoyann/status/1796089181988352216) (May 30, 2024)""", icon="βΉοΈ") | |
st.markdown(""" """) | |
st.markdown("""Do we fully leverage image encoders in vision language models? π | |
A new paper built a dense connector that does it better! Let's dig in π§Ά | |
""") | |
st.markdown(""" """) | |
st.image("pages/DenseConnector/image_1.jpg", use_column_width=True) | |
st.markdown(""" """) | |
st.markdown(""" | |
VLMs consist of an image encoder block, a projection layer that projects image embeddings to text embedding space and then a text decoder sequentially connected π | |
This [paper](https://t.co/DPQzbj0eWm) explores using intermediate states of image encoder and not a single output π€© | |
""") | |
st.markdown(""" """) | |
st.image("pages/DenseConnector/image_2.jpg", use_column_width=True) | |
st.markdown(""" """) | |
st.markdown(""" | |
The authors explore three different ways of instantiating dense connector: sparse token integration, sparse channel integration and dense channel integration (each of them just take intermediate outputs and put them together in different ways, see below). | |
""") | |
st.markdown(""" """) | |
st.image("pages/DenseConnector/image_3.jpg", use_column_width=True) | |
st.markdown(""" """) | |
st.markdown(""" | |
They explore all three of them integrated to LLaVA 1.5 and found out each of the new models are superior to the original LLaVA 1.5. | |
""") | |
st.markdown(""" """) | |
st.image("pages/DenseConnector/image_4.jpg", use_column_width=True) | |
st.markdown(""" """) | |
st.markdown(""" | |
I tried the [model](https://huggingface.co/spaces/HuanjinYao/DenseConnector-v1.5-8B) and it seems to work very well π₯Ή | |
The authors have released various [checkpoints](https://t.co/iF8zM2qvDa) based on different decoders (Vicuna 7/13B and Llama 3-8B). | |
""") | |
st.markdown(""" """) | |
st.image("pages/DenseConnector/image_5.jpg", use_column_width=True) | |
st.markdown(""" """) | |
st.info(""" | |
Ressources: | |
[Dense Connector for MLLMs](https://arxiv.org/abs/2405.13800) | |
by Huanjin Yao, Wenhao Wu, Taojiannan Yang, YuXin Song, Mengxi Zhang, Haocheng Feng, Yifan Sun, Zhiheng Li, Wanli Ouyang, Jingdong Wang (2024) | |
[GitHub](https://github.com/HJYao00/DenseConnector)""", icon="π") | |
st.markdown(""" """) | |
st.markdown(""" """) | |
st.markdown(""" """) | |
col1, col2, col3 = st.columns(3) | |
with col1: | |
if st.button('Previous paper', use_container_width=True): | |
switch_page("CuMo") | |
with col2: | |
if st.button('Home', use_container_width=True): | |
switch_page("Home") | |
with col3: | |
if st.button('Next paper', use_container_width=True): | |
switch_page("Depth Anything v2") |