LLaVA-UHD v2 Model Card
Model details
Model type: LLaVA-UHD v2, an advanced MLLM built around a hierarchical window transformer that captures diverse visual granularity by constructing and integrating a high-resolution feature pyramid.
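As a rough illustration of the mechanism named above, the sketch below builds a feature pyramid from a base ViT feature map with plain bilinear upsampling (the paper instead pretrains a JBU module for this step, see Training dataset) and lets each base-position query attend to a fixed-size local window sampled from every pyramid level. This is a minimal sketch under stated assumptions, not the released implementation; all names (`HiWinSketch`, `num_levels`, `window`) are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HiWinSketch(nn.Module):
    """Toy hierarchical-window attention over a multi-level feature pyramid.

    Illustrative only: module and parameter names are assumptions, and the
    pyramid here uses bilinear upsampling rather than the paper's JBU module.
    """

    def __init__(self, dim=64, num_levels=3, window=2, heads=4):
        super().__init__()
        self.num_levels = num_levels
        self.window = window
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def build_pyramid(self, feat):
        # feat: (B, C, H, W) base-resolution feature map. Upsample into
        # progressively finer levels to form a high-resolution pyramid.
        return [
            F.interpolate(feat, scale_factor=2 ** i, mode="bilinear",
                          align_corners=False)
            for i in range(self.num_levels)
        ]

    def forward(self, feat):
        B, C, H, W = feat.shape
        w = self.window
        pyramid = self.build_pyramid(feat)

        # One query token per base-resolution position.
        q = feat.flatten(2).transpose(1, 2).reshape(B * H * W, 1, C)

        # From each level, take a fixed w x w local window per query, so
        # every level contributes the same number of key/value tokens.
        kv = []
        for lvl in pyramid:
            lvl = F.interpolate(lvl, size=(H * w, W * w), mode="bilinear",
                                align_corners=False)
            patches = F.unfold(lvl, kernel_size=w, stride=w)  # (B, C*w*w, H*W)
            patches = patches.view(B, C, w * w, H * W).permute(0, 3, 2, 1)
            kv.append(patches)                                # (B, H*W, w*w, C)
        kv = torch.cat(kv, dim=2).reshape(B * H * W, -1, C)

        out, _ = self.attn(q, kv, kv)      # each query attends to its own window
        return out.reshape(B, H * W, C)    # multi-granularity visual tokens


# Usage: compress a 24x24 feature map into 576 multi-level tokens.
tokens = HiWinSketch()(torch.randn(1, 64, 24, 24))
print(tokens.shape)  # torch.Size([1, 576, 64])
```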
Model date: LLaVA-UHD v2 was trained in November 2024.
Paper or resources for more information: https://github.com/thunlp/LLaVA-UHD
License
LLaVA-UHD v2 is licensed under the Llama 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
Where to send questions or comments about the model: https://github.com/thunlp/LLaVA-UHD/issues
Intended use
Primary intended uses: The primary use of LLaVA-UHD v2 is research on large multimodal models and chatbots.
Primary intended users: The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
Training dataset
- JBU pretraining: COCO-Stuff 2017 (see the JBU sketch after this list)
- Pretraining: LLaVA-Pretrain 558K (filtered image-text pairs from LAION/CC/SBU, captioned by BLIP).
- SFT: the 858K mixed dataset at https://huggingface.co/datasets/YipengZhang/LLaVA-UHD-v2-SFT-Data
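"JBU" refers to the Joint Bilateral Upsampling module used to build the high-resolution feature pyramid. For orientation, below is a minimal sketch of classic joint bilateral upsampling (Kopf et al., 2007): each high-resolution output pixel averages nearby low-resolution features, weighted by spatial distance and by similarity in a high-resolution guidance image. This is not the learned JBU module from the paper; the function name `jbu` and its parameters are illustrative assumptions.

```python
import math
import torch
import torch.nn.functional as F

def jbu(feat_lo, guide, radius=2, sigma_s=1.0, sigma_r=0.1):
    """Minimal joint bilateral upsampling sketch (Kopf et al., 2007).

    feat_lo: (B, C, h, w) low-resolution features.
    guide:   (B, 3, H, W) high-resolution guidance image.
    Returns (B, C, H, W): a spatial-and-range-weighted average of the
    (nearest-upsampled) features, with range weights from the guide.
    """
    H, W = guide.shape[-2:]
    up = F.interpolate(feat_lo, size=(H, W), mode="nearest")

    num = torch.zeros_like(up)           # weighted feature accumulator
    den = torch.zeros_like(up[:, :1])    # weight normalizer
    pad = radius
    up_p = F.pad(up, (pad, pad, pad, pad), mode="replicate")
    g_p = F.pad(guide, (pad, pad, pad, pad), mode="replicate")

    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            # Shifted neighbors of every output pixel.
            f_n = up_p[..., pad + dy:pad + dy + H, pad + dx:pad + dx + W]
            g_n = g_p[..., pad + dy:pad + dy + H, pad + dx:pad + dx + W]
            # Spatial Gaussian weight (scalar per offset).
            w_s = math.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
            # Range Gaussian weight from the guidance image, per pixel.
            w_r = torch.exp(-((guide - g_n) ** 2).sum(1, keepdim=True)
                            / (2 * sigma_r ** 2))
            wgt = w_s * w_r              # (B, 1, H, W)
            num += wgt * f_n
            den += wgt
    return num / den


# Usage: 4x upsampling of a 24x24 feature map guided by a 96x96 image.
hi = jbu(torch.randn(1, 64, 24, 24), torch.randn(1, 3, 96, 96))
print(hi.shape)  # torch.Size([1, 64, 96, 96])
```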
Citation
If you find LLaVA-UHD v2 useful for your research and applications, please cite using this BibTeX:
```bibtex
@article{zhang2024llavauhdv2,
  title={LLaVA-UHD v2: an MLLM Integrating High-Resolution Feature Pyramid via Hierarchical Window Transformer},
  author={Yipeng Zhang and Yifan Liu and Zonghao Guo and Yidan Zhang and Xuesong Yang and Chi Chen and Jun Song and Bo Zheng and Yuan Yao and Zhiyuan Liu and Tat-Seng Chua and Maosong Sun},
  journal={arXiv preprint arXiv:2412.13871},
  year={2024}
}
```