{"paper_url": "https://huggingface.co/papers/2309.11499", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Unified Language-Vision Pretraining with Dynamic Discrete Visual Tokenization](https://huggingface.co/papers/2309.04669) (2023)\n* [Scaling Autoregressive Multi-Modal Models: Pretraining and Instruction Tuning](https://huggingface.co/papers/2309.02591) (2023)\n* [MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning](https://huggingface.co/papers/2309.07915) (2023)\n* [StableLLaVA: Enhanced Visual Instruction Tuning with Synthesized Image-Dialogue Data](https://huggingface.co/papers/2308.10253) (2023)\n* [Empowering Vision-Language Models to Follow Interleaved Vision-Language Instructions](https://huggingface.co/papers/2308.04152) (2023)\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space"} |