---
inference: false
pipeline_tag: image-text-to-text
---
# LLaVA-UHD v2 Model Card
## Model details
**Model type:** LLaVA-UHD v2 is an advanced MLLM centered on a hierarchical window transformer that captures diverse visual granularity by constructing and integrating a high-resolution feature pyramid; a toy sketch of this design follows this section.
**Model date:** LLaVA-UHD v2 was trained in November 2024.
**Paper or resources for more information:** https://github.com/thunlp/LLaVA-UHD
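The sketch below is a minimal, self-contained PyTorch illustration of the hierarchical idea only, not the authors' implementation: plain bilinear upsampling stands in for the learned JBU module referenced in the training data below, and all class and parameter names (`WindowAttention`, `HiResPyramid`, `scales`) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WindowAttention(nn.Module):
    """Multi-head self-attention applied within non-overlapping windows."""
    def __init__(self, dim, num_heads=8, window=4):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):  # x: (B, C, H, W), H and W divisible by window
        B, C, H, W = x.shape
        w = self.window
        # Partition the feature map into (B * num_windows, w*w, C) token groups.
        x = x.view(B, C, H // w, w, W // w, w)
        x = x.permute(0, 2, 4, 3, 5, 1).reshape(-1, w * w, C)
        x, _ = self.attn(x, x, x)
        # Reverse the window partition back to a (B, C, H, W) map.
        x = x.view(B, H // w, W // w, w, w, C)
        x = x.permute(0, 5, 1, 3, 2, 4).reshape(B, C, H, W)
        return x

class HiResPyramid(nn.Module):
    """Builds a small feature pyramid from coarse visual features and fuses it.

    Illustrative only: bilinear upsampling stands in for the paper's learned
    JBU module, and the fusion is a simple 1x1 convolution over all levels.
    """
    def __init__(self, dim, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.levels = nn.ModuleList(WindowAttention(dim) for _ in scales)
        self.fuse = nn.Conv2d(dim * len(scales), dim, kernel_size=1)

    def forward(self, feat):  # feat: (B, C, H, W) coarse visual features
        base_hw = feat.shape[-2:]
        outs = []
        for s, level in zip(self.scales, self.levels):
            x = F.interpolate(feat, scale_factor=s, mode="bilinear") if s > 1 else feat
            x = level(x)                           # window attention at this granularity
            x = F.adaptive_avg_pool2d(x, base_hw)  # bring back to a shared grid
            outs.append(x)
        return self.fuse(torch.cat(outs, dim=1))   # (B, C, H, W) fused tokens

if __name__ == "__main__":
    pyramid = HiResPyramid(dim=64)
    tokens = pyramid(torch.randn(1, 64, 16, 16))
    print(tokens.shape)  # torch.Size([1, 64, 16, 16])
```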
## License
Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
**Where to send questions or comments about the model:** https://github.com/thunlp/LLaVA-UHD/issues
## Intended use
**Primary intended uses:** The primary use of LLaVA-UHD v2 is research on large multimodal models and chatbots.
**Primary intended users:** The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
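For research use, the checkpoint is intended to be loaded through the code in the linked repository rather than through a generic `transformers` pipeline. The snippet below is a hedged sketch assuming the upstream LLaVA builder API that the repository derives from; the model path shown is hypothetical, so consult the repository for the actual entry point and checkpoint name.

```python
# Hedged sketch only: assumes the LLaVA-style builder that the
# thunlp/LLaVA-UHD repository derives from; the path is hypothetical.
from llava.model.builder import load_pretrained_model

tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path="path/to/llava-uhd-v2-checkpoint",  # hypothetical checkpoint path
    model_base=None,
    model_name="llava-uhd-v2",
)
```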
## Training dataset
- JBU pretrain: MS-COCO-Stuff 2017.
- Pretrain: LLaVA-Pretrain 558K (image-text pairs filtered from LAION/CC/SBU and captioned by BLIP).
- SFT: the 858K mixed dataset at https://huggingface.co/datasets/YipengZhang/LLaVA-UHD-v2-SFT-Data.