Do we fully leverage image encoders in vision language models? A new paper built a dense connector that does it better! Let's dig in 🧶

VLMs consist of an image encoder block, a projection layer that maps image embeddings into the text embedding space, and a text decoder, connected sequentially. This [paper](https://t.co/DPQzbj0eWm) explores using intermediate states of the image encoder instead of only its final output 🤩

The authors explore three different ways of instantiating the dense connector: sparse token integration, sparse channel integration, and dense channel integration (each simply takes intermediate outputs and combines them in a different way, see below).

They integrate all three into LLaVA 1.5 and find that each of the new models outperforms the original LLaVA 1.5.

I tried the model and it seems to work very well 🥹 The authors have released various [checkpoints](https://t.co/iF8zM2qvDa) based on different decoders (Vicuna 7/13B and Llama 3-8B).

> [!TIP]
> Resources: [Dense Connector for MLLMs](https://arxiv.org/abs/2405.13800) by Huanjin Yao, Wenhao Wu, Taojiannan Yang, YuXin Song, Mengxi Zhang, Haocheng Feng, Yifan Sun, Zhiheng Li, Wanli Ouyang, Jingdong Wang (2024)
> [GitHub](https://github.com/HJYao00/DenseConnector)

> [!NOTE]
> [Original tweet](https://twitter.com/mervenoyann/status/1796089181988352216) (May 30, 2024)
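
To make the dense channel integration idea concrete, here is a minimal sketch (not the authors' implementation): it concatenates hidden states from a few intermediate vision-encoder layers along the channel dimension and projects them into the LLM embedding space. The layer indices, dimensions, and MLP shape are illustrative assumptions, not values from the paper.

```python
# Sketch of a "dense channel integration" style connector: concatenate
# intermediate vision-encoder hidden states along the channel dimension,
# then project to the text-embedding width with a small MLP.
# Layer indices and sizes below are assumptions for illustration only.
import torch
import torch.nn as nn


class DenseChannelConnector(nn.Module):
    def __init__(self, vision_dim=1024, text_dim=4096, num_layers=3):
        super().__init__()
        # Project the channel-concatenated visual features to the LLM width.
        self.proj = nn.Sequential(
            nn.Linear(vision_dim * num_layers, text_dim),
            nn.GELU(),
            nn.Linear(text_dim, text_dim),
        )

    def forward(self, hidden_states, layer_ids=(8, 16, 23)):
        # hidden_states: tuple of per-layer tensors, each of shape
        # (batch, num_patches, vision_dim), e.g. a vision tower called
        # with output_hidden_states=True.
        selected = [hidden_states[i] for i in layer_ids]
        dense = torch.cat(selected, dim=-1)  # (batch, num_patches, vision_dim * k)
        return self.proj(dense)              # (batch, num_patches, text_dim)


# Toy usage with random features standing in for real encoder outputs.
fake_hidden = tuple(torch.randn(1, 576, 1024) for _ in range(24))
connector = DenseChannelConnector()
visual_tokens = connector(fake_hidden)       # ready to prepend to the text tokens
print(visual_tokens.shape)                   # torch.Size([1, 576, 4096])
```

The sparse variants follow the same pattern but differ in how the intermediate outputs are combined (e.g. stacking extra visual tokens instead of widening the channel dimension).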