How do you plan to convert when llama.cpp doesn't support it yet?
#2 · opened by Wladastic
Currently you cannot even run it on llama.cpp; how are you planning to convert it to GGUF?
We are waiting for an implementation from llama.cpp: https://github.com/ggerganov/llama.cpp/issues/6868
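Once that issue is resolved and the architecture is registered in llama.cpp's converter, the conversion would presumably follow the usual flow. A rough sketch (the model path is a placeholder, and exact flags may differ by llama.cpp version):

```shell
# Fetch llama.cpp and its conversion-script dependencies
git clone https://github.com/ggerganov/llama.cpp
pip install -r llama.cpp/requirements.txt

# Convert the Hugging Face checkpoint to a GGUF file (f16 here;
# "path/to/model" is a placeholder for the local checkpoint directory)
python llama.cpp/convert_hf_to_gguf.py path/to/model \
    --outfile model-f16.gguf --outtype f16

# Optionally quantize afterwards with the llama-quantize tool built from the repo,
# e.g.: ./llama.cpp/llama-quantize model-f16.gguf model-Q4_K_M.gguf Q4_K_M
```

Until the architecture is supported upstream, the converter will simply reject the model, so there is nothing to run yet.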