Phi-3-V: Extending the Visual Capabilities of LLaVA with Phi-3

Repository Overview

This repository features LLaVA v1.5 built on the Phi-3-mini-3.8B LLM. Image features from a CLIP vision backbone are mapped through a vision-to-language projector into Phi-3, combining the strengths of both models for vision-language understanding.
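
For orientation, here is a minimal structural sketch of this LLaVA-style design in PyTorch. Module names, dimensions, and wiring are illustrative assumptions, not this repository's actual implementation:

```python
# Illustrative-only sketch of the LLaVA-style architecture; module names,
# dimensions, and wiring are simplified assumptions, not this repo's code.
import torch
import torch.nn as nn

class LlavaStyleModel(nn.Module):
    def __init__(self, vision_encoder, llm, vision_dim=1024, llm_dim=3072):
        super().__init__()
        self.vision_encoder = vision_encoder          # e.g. a CLIP ViT (kept frozen)
        # Vision-to-language projector: a small MLP, the only part trained in stage 1.
        self.projector = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )
        self.llm = llm                                # Phi-3-mini backbone

    def forward(self, pixel_values, text_embeds):
        img_feats = self.vision_encoder(pixel_values)     # (B, N, vision_dim)
        img_tokens = self.projector(img_feats)            # (B, N, llm_dim)
        # Projected image tokens are prepended to the text embeddings.
        return self.llm(inputs_embeds=torch.cat([img_tokens, text_embeds], dim=1))
```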

Training Strategy

  • Pretraining: Only the vision-to-language projector is trained; the rest of the model is frozen.
  • Fine-tuning: The LLM is fine-tuned with LoRA; only the vision backbone (CLIP) is kept frozen (see the sketch after this list).
  • Note: This repository contains the merged weights, i.e. the LoRA adapters have been folded back into the base model.
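
As an illustration of the fine-tuning stage, the sketch below shows how this freeze/LoRA split could be expressed with Hugging Face peft. It is not the authors' training code; the checkpoint path, target modules, and hyperparameters are assumptions:

```python
# Illustrative stage-2 setup, not the authors' training code. Assumes an
# HF-format LLaVA checkpoint; target modules and hyperparameters are guesses.
import torch
from transformers import LlavaForConditionalGeneration
from peft import LoraConfig, get_peft_model

model = LlavaForConditionalGeneration.from_pretrained(
    "path/to/hf-format-llava-phi-3",  # hypothetical path, not this repo's layout
    torch_dtype=torch.float16,
)

# Freeze the CLIP vision backbone, as described above.
for name, param in model.named_parameters():
    if "vision_tower" in name:
        param.requires_grad = False

# Apply LoRA to the LLM's attention projections (module names are assumptions;
# Phi-3's HF implementation uses fused qkv_proj/o_proj layers).
lora_cfg = LoraConfig(
    r=128, lora_alpha=256, lora_dropout=0.05,
    target_modules=["qkv_proj", "o_proj"],
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()
```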

Key Components

  • Base LLM: Phi-3-mini-4k-instruct
  • Base LMM: LLaVA-v1.5

Training Data

Download the Model

git lfs install
git clone https://huggingface.co/MBZUAI/LLaVA-Phi-3-mini-4k-instruct
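
Once cloned, the merged weights can be loaded with the LLaVA/LLaVA++ codebase. The builder call below follows the upstream LLaVA API; exact entry points may vary between LLaVA++ versions, so treat it as a sketch:

```python
# Sketch: loading the merged weights through the LLaVA/LLaVA++ codebase.
# The builder API follows upstream LLaVA; verify against the LLaVA++ repo.
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path

model_path = "MBZUAI/LLaVA-Phi-3-mini-4k-instruct"
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=None,  # weights are already merged, so no separate base is needed
    model_name=get_model_name_from_path(model_path),
)
```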

License

This project is available under the MIT License.

Contributions

Contributions are welcome! If you find this model useful, please 🌟 our LLaVA++ repository.

