GGUF version

#1
by maria-ai - opened

I can't make a GGUF version. Is it possible?
https://huggingface.co/spaces/ggml-org/gguf-my-repo
ERROR:hf-to-gguf:Model LlavaForConditionalGeneration is not supported

deep-vk org

Hi!
Try using the official repo, which has detailed instructions: https://github.com/ggerganov/llama.cpp/blob/master/examples/llava/README.md

@SpirinEgor

I tried that too, but I got errors (I tried to fix them, but I couldn't):
python examples/llava/llava-surgery-v2.py -m llava-saiga-8b

No tensors found. Is this a LLaVA model?
deep-vk org

I see the problem

  1. The official repo uses the legacy model organisation, i.e. its state dict stores the vision tower under model.vision_tower and the projector under model.mm_projector. You can see this in the proj_criteria method, for example. But there are plenty of hard-coded names :(
  2. Our implementation is synchronised with 🤗 and uses a different mapping in the state dict. You can explore it with the safetensors viewer on the Hub.
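The layout mismatch above is easy to see by grouping checkpoint keys by their top-level prefix. A minimal sketch with hypothetical key names (assumption: the legacy layout nests everything under model.*, while 🤗's LlavaForConditionalGeneration uses vision_tower.*, multi_modal_projector.*, and language_model.*; the real names are visible in the model's safetensors index on the Hub):

```python
# Hypothetical key names illustrating the two layouts (assumption:
# check the actual names in the safetensors viewer on the Hub).
legacy_keys = [
    "model.vision_tower.vision_model.encoder.layers.0.self_attn.q_proj.weight",
    "model.mm_projector.0.weight",
    "model.layers.0.self_attn.q_proj.weight",
]
hf_keys = [
    "vision_tower.vision_model.encoder.layers.0.self_attn.q_proj.weight",
    "multi_modal_projector.linear_1.weight",
    "language_model.model.layers.0.self_attn.q_proj.weight",
]

def top_level_prefixes(keys):
    """Group checkpoint keys by their first dotted component."""
    return sorted({k.split(".")[0] for k in keys})

print(top_level_prefixes(legacy_keys))
# ['model'] -- everything hidden under one prefix; the surgery script
# relies on the hard-coded inner names and finds nothing here.
print(top_level_prefixes(hf_keys))
# ['language_model', 'multi_modal_projector', 'vision_tower']
```

This is why llava-surgery-v2.py reports "No tensors found": it looks for the legacy names, which don't exist in the 🤗-style checkpoint.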

Therefore, if you want to convert this model to GGUF (and probably any other LLaVA model on HF), you need to create your own llava-surgery script that separates the vision tower (CLIP), the projector, and the LM (LLaMA), and then convert each part to GGUF.
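The splitting step could be sketched as a simple partition of the state dict by key prefix. This is a minimal sketch, not the actual surgery script: it assumes the 🤗 LlavaForConditionalGeneration key layout (vision_tower.*, multi_modal_projector.*, language_model.*) and uses dummy values in place of real tensors, which you would load with safetensors or torch and then feed to the respective GGUF converters.

```python
# Assumed prefixes from the HF-style checkpoint; verify against the
# model's safetensors index before relying on them.
PARTS = {
    "vision_tower.": "clip",           # CLIP encoder -> vision GGUF
    "multi_modal_projector.": "proj",  # projector -> mmproj GGUF
    "language_model.": "llm",          # LLaMA -> regular LLM GGUF
}

def split_state_dict(state_dict):
    """Partition a checkpoint into vision / projector / LM parts,
    stripping the part prefix from each key."""
    parts = {name: {} for name in PARTS.values()}
    for key, tensor in state_dict.items():
        for prefix, name in PARTS.items():
            if key.startswith(prefix):
                parts[name][key[len(prefix):]] = tensor
                break
    return parts

# Dummy state dict standing in for the real tensors.
sd = {
    "vision_tower.vision_model.embeddings.patch_embedding.weight": 0,
    "multi_modal_projector.linear_1.weight": 0,
    "language_model.model.embed_tokens.weight": 0,
}
parts = split_state_dict(sd)
print({name: list(keys) for name, keys in parts.items()})
```

Each of the three resulting dicts would then be saved separately and converted with the matching llama.cpp tooling.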
