Base Fine Tune model
#8 opened by bdytx5
Could you release the base fine-tuned model without the CoT training? I am writing an article on this. Thanks!
Hi, the base model is Llama-3.2-11B-Vision-Instruct: https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct
Xkev changed discussion status to closed
Sorry, I meant the "Direct Training" model on the LLaVA 100k dataset (i.e., trained just on the answers rather than on the full CoT data). From the paper: "Here, LLaVA-o1 (with Direct Training) refers to the model trained directly on the original VQA dataset's Q&A pairs."
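For readers less familiar with the distinction: the difference is only in the supervision target. A minimal sketch below, assuming a CoT format with staged tags and illustrative field names (the exact tags and dataset schema are assumptions, not the released format):

```python
# Sketch of the two supervision targets discussed above.
# Field names and stage tags are illustrative assumptions only.

def direct_training_target(sample: dict) -> str:
    """'Direct Training': supervise only on the final answer of the Q&A pair."""
    return sample["answer"]

def cot_training_target(sample: dict) -> str:
    """CoT training: supervise on the full staged reasoning plus the answer."""
    return (
        f"<SUMMARY>{sample['summary']}</SUMMARY>"
        f"<CAPTION>{sample['caption']}</CAPTION>"
        f"<REASONING>{sample['reasoning']}</REASONING>"
        f"<CONCLUSION>{sample['answer']}</CONCLUSION>"
    )

sample = {
    "question": "What color is the car?",
    "answer": "Red.",
    "summary": "The question asks about the car's color.",
    "caption": "The image shows a car parked on a street.",
    "reasoning": "The car in the foreground is clearly red.",
}

print(direct_training_target(sample))  # answer-only target
print(cot_training_target(sample))     # full CoT target
```

So the request is for the checkpoint fine-tuned with the first kind of target, as an ablation baseline against the CoT-trained release.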