---
inference: false
pipeline_tag: image-text-to-text
---

# LLaVA-UHD v2 Model Card

## Model details

Model type: LLaVA-UHD v2 is an advanced MLLM built around a hierarchical window transformer that captures diverse visual granularity by constructing and integrating a high-resolution feature pyramid.

Model date: LLaVA-UHD v2 was trained in November 2024.

Paper or resources for more information: https://github.com/thunlp/LLaVA-UHD
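
For reference, the snippet below is a minimal, illustrative inference sketch. It assumes a LLaVA-style Python interface inherited from the upstream LLaVA codebase (`load_pretrained_model`, `conv_templates`, `tokenizer_image_token`, `process_images`); the exact module layout, conversation template, model name, and repo id are assumptions, so please consult the GitHub repository above for the supported usage.

```python
# Minimal inference sketch (assumed LLaVA-style interface; see the GitHub repo
# for the exact, supported usage). The repo id, model name, and conversation
# template below are assumptions and may not match this release.
import torch
from PIL import Image

from llava.constants import DEFAULT_IMAGE_TOKEN, IMAGE_TOKEN_INDEX
from llava.conversation import conv_templates
from llava.mm_utils import process_images, tokenizer_image_token
from llava.model.builder import load_pretrained_model

model_path = "YipengZhang/LLaVA-UHD-v2"  # assumed Hugging Face repo id
tokenizer, model, image_processor, _ = load_pretrained_model(
    model_path, model_base=None, model_name="llava-uhd-v2"
)

# Build a single-turn prompt containing the image placeholder token.
conv = conv_templates["vicuna_v1"].copy()  # assumed conversation template
conv.append_message(conv.roles[0], DEFAULT_IMAGE_TOKEN + "\nDescribe this image.")
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()

# Preprocess the image and tokenize the prompt with the image token index.
image = Image.open("example.jpg").convert("RGB")
image_tensor = process_images([image], image_processor, model.config).to(
    model.device, dtype=torch.float16
)
input_ids = (
    tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt")
    .unsqueeze(0)
    .to(model.device)
)

with torch.inference_mode():
    output_ids = model.generate(input_ids, images=image_tensor, max_new_tokens=128)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True).strip())
```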

## License

LLaVA-UHD v2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.

Where to send questions or comments about the model: https://github.com/thunlp/LLaVA-UHD/issues

## Intended use

Primary intended uses: The primary use of LLaVA-UHD v2 is research on large multimodal models and chatbots.

Primary intended users: The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## Training dataset