---
license: apache-2.0
pipeline_tag: text-generation
language:
- en
- zh
base_model:
- meta-llama/Meta-Llama-3-8B-Instruct
---
# Insight-V-Summary-LLaMA3

## Model Summary
The Insight-V models are 8B-parameter models built on the LLaMA3-8B language model, with a context window of 32K tokens.
Insight-V offers 1) a scalable data generation pipeline for long-chain, high-quality reasoning data, 2) a multi-agent system that decomposes visual reasoning tasks into reasoning and summarization, and 3) a two-stage training pipeline to enhance visual reasoning capabilities. Together, these contributions address key challenges in visual reasoning, providing a solid foundation for future research in MLLM reasoning.
- Repository: https://github.com/dongyh20/Insight-V
- Languages: English, Chinese
- Paper: https://arxiv.org/abs/2411.14432
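The multi-agent decomposition described above can be pictured as two cooperating roles: a reasoning agent that produces a long reasoning chain, and a summarization agent (the role this checkpoint plays) that condenses it into a final answer. The sketch below only illustrates that flow; the function names and bodies are placeholders, not the repository's actual API.

```python
# Conceptual sketch of Insight-V's reasoning/summarization decomposition.
# Both functions are stubs for illustration; in Insight-V each role is a
# separate MLLM, and this checkpoint corresponds to the summarization role.
def reasoning_agent(question: str, image_desc: str) -> str:
    # Placeholder for the dedicated reasoning model's long reasoning chain.
    return f"Step-by-step reasoning about '{question}' given {image_desc}."

def summary_agent(question: str, reasoning_chain: str) -> str:
    # Placeholder for the summary model, which condenses the reasoning chain
    # into the final answer.
    return f"Final answer to '{question}', distilled from: {reasoning_chain[:60]}..."

chain = reasoning_agent("How many objects are red?", "an image with several blocks")
print(summary_agent("How many objects are red?", chain))
```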
## Model Architecture
- Architecture: Pre-trained Oryx-ViT + LLaMA3-8B
- Data: a mixture of 1.2M image-text pairs
- Precision: BFloat16
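The components listed above follow the common LLaVA-style composition of a vision encoder (Oryx-ViT), a projector, and an LLM backbone (LLaMA3-8B). The following sketch illustrates that composition in plain PyTorch; the projector design and feature dimensions are assumptions for illustration, not the repository's actual implementation.

```python
# Illustrative sketch of a vision-encoder + projector + LLM composition.
# Dimensions and module choices are assumed, not taken from Insight-V's code.
import torch
import torch.nn as nn

class VisionLanguageSketch(nn.Module):
    def __init__(self, vision_dim=1024, llm_dim=4096):
        super().__init__()
        # Stand-in for the pre-trained Oryx-ViT vision encoder.
        self.vision_encoder = nn.Identity()
        # MLP projector mapping visual features into the LLM embedding space
        # (a common LLaVA-style design, assumed here).
        self.projector = nn.Sequential(
            nn.Linear(vision_dim, llm_dim), nn.GELU(), nn.Linear(llm_dim, llm_dim)
        )

    def forward(self, image_features, text_embeddings):
        # Project visual tokens and prepend them to the text token embeddings,
        # forming the sequence consumed by the LLaMA3-8B backbone.
        visual_tokens = self.projector(self.vision_encoder(image_features))
        return torch.cat([visual_tokens, text_embeddings], dim=1)

# Toy usage: one image with 256 visual tokens, 16 text tokens.
model = VisionLanguageSketch()
img = torch.randn(1, 256, 1024)
txt = torch.randn(1, 16, 4096)
print(model(img, txt).shape)  # torch.Size([1, 272, 4096])
```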
## Hardware & Software
- Hardware: 64 × NVIDIA A100 GPUs
- Orchestration: Hugging Face Trainer
- Code: PyTorch
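As a rough picture of how a Hugging Face Trainer run in BFloat16 could be configured, here is a minimal sketch; the output path, batch size, learning rate, and epoch count are illustrative assumptions, not the authors' settings.

```python
# Minimal Trainer configuration sketch matching the reported BFloat16
# precision; all hyperparameter values below are assumed for illustration.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="insight-v-summary",     # hypothetical output path
    bf16=True,                          # BFloat16 precision, as reported above
    per_device_train_batch_size=4,      # assumed; not from the model card
    gradient_accumulation_steps=8,      # assumed; not from the model card
    learning_rate=1e-5,                 # assumed; not from the model card
    num_train_epochs=1,                 # assumed; not from the model card
    report_to="none",
)
# A Trainer would then be built with the multimodal model and the 1.2M
# image-text mixture: Trainer(model=..., args=args, train_dataset=...)
```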