Insight-V-Summary-LLaMA3

Model Summary

The Insight-V models are 8B-parameter models built on the LLaMA3-8B language model, with a context window of 32K tokens.

Insight-V offers 1) a scalable data generation pipeline for long-chain, high-quality reasoning data, 2) a multi-agent system that decomposes visual reasoning tasks into reasoning and summarization, and 3) a two-stage training pipeline to enhance visual reasoning capabilities. Together, these contributions address key challenges in visual reasoning, providing a solid foundation for future research in MLLM reasoning.
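The reasoning-then-summarization decomposition described above can be sketched as follows. This is a minimal illustration of the control flow only; the agent functions are hypothetical stubs standing in for calls to the reasoning model and the summary model (the role of this checkpoint), not the actual Insight-V API.

```python
from dataclasses import dataclass


@dataclass
class VisualQuery:
    """A visual question paired with an image reference."""
    image: str
    question: str


def reasoning_agent(query: VisualQuery) -> str:
    # Hypothetical stub: in Insight-V this would be the reasoning model
    # producing a long-chain, step-by-step reasoning trace.
    return (
        f"Step 1: inspect {query.image}. "
        f"Step 2: relate observations to '{query.question}'."
    )


def summary_agent(query: VisualQuery, reasoning_trace: str) -> str:
    # Hypothetical stub: the summary model condenses the trace
    # into a concise final answer.
    return f"Answer to '{query.question}', derived from the reasoning trace."


def multi_agent_answer(query: VisualQuery) -> str:
    # The two-stage decomposition: reason first, then summarize.
    trace = reasoning_agent(query)
    return summary_agent(query, trace)


print(multi_agent_answer(VisualQuery("chart.png", "What is the trend?")))
```

Separating the two roles lets each model be trained on data suited to its task: long-chain traces for the reasoning agent, concise grounded answers for the summary agent.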

Model Architecture

  • Architecture: Pre-trained Oryx-ViT + LLaMA3-8B
  • Data: a mixture of 1.2M image-text pairs
  • Precision: BFloat16

Hardware & Software

  • Hardware: 64 × NVIDIA Tesla A100 GPUs
  • Orchestration: Hugging Face Trainer
  • Code: PyTorch

Citation

