How was this model converted?
#2
by soichisumi
As per my understanding, the Mistral model isn't currently supported by llama.cpp's conversion script:
https://github.com/ggerganov/llama.cpp/blob/4ffcdce2ff877ebb683cd217ea38faf20faa5ffe/convert-hf-to-gguf.py#L383-L1842
When we convert the model using convert_hf_to_gguf.py, it returns an error like:
```
$ poetry run python llama.cpp/convert_hf_to_gguf.py /path/to/model --outfile /path/to/model/model.gguf --outtype q8_0
INFO:hf-to-gguf:Loading model: SFR-Embedding-2_R
ERROR:hf-to-gguf:Model MistralModel is not supported
```
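For reference, one workaround I've been experimenting with (not necessarily how the authors did it) is renaming the architecture in the model's `config.json` before running the converter, since the script dispatches on that field. This is only a sketch under my own assumptions: I'm guessing the checkpoint is weight-compatible with `MistralForCausalLM`, which the converter does register, and the demo config below is a made-up stand-in for the real model directory.

```python
import json
import tempfile
from pathlib import Path

def patch_architecture(model_dir: Path) -> list:
    """Rewrite config.json's "architectures" entry so convert_hf_to_gguf.py
    recognizes the model. Assumption: the weights really are compatible with
    the "MistralForCausalLM" layout that the converter registers."""
    config_path = model_dir / "config.json"
    config = json.loads(config_path.read_text())
    config["architectures"] = ["MistralForCausalLM"]
    config_path.write_text(json.dumps(config, indent=2))
    return config["architectures"]

# Demo on a throwaway config.json standing in for the downloaded model dir.
with tempfile.TemporaryDirectory() as tmp:
    model_dir = Path(tmp)
    (model_dir / "config.json").write_text(
        json.dumps({"architectures": ["MistralModel"], "hidden_size": 4096})
    )
    print(patch_architecture(model_dir))  # ['MistralForCausalLM']
```

After patching, rerunning the same `convert_hf_to_gguf.py` command might get past the dispatch error, but I haven't verified the resulting GGUF is correct.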
Could you let me know how you converted this model? I'm struggling to convert models that aren't supported by llama.cpp's script.
Thank you in advance!