Please update the conversion script: llama.cpp has added support for the Falcon Mamba architecture.
#114
by NikolayKozloff · opened
When I try to make a Q8 GGUF for this model (https://huggingface.co/tiiuae/falcon-mamba-7b-instruct), I get this error:

Error converting to fp16:
INFO:hf-to-gguf:Loading model: falcon-mamba-7b-instruct
ERROR:hf-to-gguf:Model FalconMambaForCausalLM is not supported
The llama.cpp release that supports Falcon Mamba is here: https://github.com/ggerganov/llama.cpp/releases/tag/b3612
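For anyone hitting the same error locally, a minimal sketch of the fix is to check out that release (or newer) and rerun the converter from it; the model path and output filename below are assumptions, substitute your own:

```shell
# Get a llama.cpp checkout at or after b3612, where FalconMambaForCausalLM is supported
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout b3612

# Install the converter's Python dependencies
pip install -r requirements.txt

# Convert the HF model straight to Q8_0 GGUF
# (/path/to/falcon-mamba-7b-instruct is a placeholder for your local model directory)
python convert_hf_to_gguf.py /path/to/falcon-mamba-7b-instruct \
    --outtype q8_0 \
    --outfile falcon-mamba-7b-instruct-Q8_0.gguf
```

Older checkouts will still fail with "Model FalconMambaForCausalLM is not supported", since the architecture mapping only exists in the newer script.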
NikolayKozloff changed discussion status to closed