
FP16 GGUF of the LLaVA-MORE LLaMA 3.1 8B finetuning mmproj (multimodal projector)

Original Model Card:

Model Card: LLaVA_MORE-llama_3_1-8B-finetuning

LLaVA-MORE enhances the well-known LLaVA architecture by integrating the use of LLaMA 3.1 as the language model. We are publicly releasing the checkpoints for stages one and two for the first model with 8B parameters.

In this model space, you will find the stage two (finetuning) weights of LLaVA-MORE LLaMA 3.1 8B.

For more information, visit our LLaVA-MORE repository.

Inference

You can try LLaVA-MORE on the image-to-text task by cloning our repository and running the following script:

```shell
python -u llava/eval/run_llava.py
```

Citation

If you make use of our work, please cite our repo:

```bibtex
@misc{cocchi2024llavamore,
  title={{LLaVA-MORE: Enhancing Visual Instruction Tuning with LLaMA 3.1}},
  author={Cocchi, Federico and Moratelli, Nicholas and Caffagni, Davide and Sarto, Sara and Cornia, Marcella and Baraldi, Lorenzo and Cucchiara, Rita},
  url={https://github.com/aimagelab/LLaVA-MORE},
  year={2024}
}
```
GGUF metadata

- Model size: 312M params
- Architecture: clip
- Precision: 16-bit
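As a quick sanity check after downloading, the fixed-size GGUF header (the magic bytes `GGUF`, followed by a little-endian uint32 version, a uint64 tensor count, and a uint64 metadata key-value count) can be read with a short script. This is a minimal sketch based on the GGUF file format specification; the filename passed in is an assumption, not a file shipped under that exact name.

```python
import struct

def read_gguf_header(path):
    """Read the fixed-size GGUF header: magic, version, tensor count, KV count."""
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file (magic={magic!r})")
        # Little-endian: uint32 version, uint64 tensor_count, uint64 metadata_kv_count
        version, n_tensors, n_kv = struct.unpack("<IQQ", f.read(20))
    return {"version": version, "tensors": n_tensors, "metadata_kv": n_kv}

# Hypothetical local filename for the downloaded mmproj:
# read_gguf_header("mmproj-model-f16.gguf")
```

A non-`GGUF` magic or a version of zero usually indicates a truncated or corrupted download.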
