Could you also convert Qwen/Qwen2-0.5B-Instruct?
#1 · opened by Felladrin
Thanks for converting this model, @Minami-su !
The GGUF version of llamafied Qwen allows for smaller splits than the original model when using gguf-split, which is great for running this model on mobile devices with wllama (see the sketch at the end of this message).
So, now that Qwen/Qwen2-0.5B-Instruct is out, could you kindly convert it too?
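For reference, loading such a sharded GGUF in the browser with wllama looks roughly like this. This is a minimal sketch: the wasm asset paths, shard URLs, and quantization name are placeholders, and the exact config keys may differ between wllama versions, so check the wllama docs for your release.

```typescript
import { Wllama } from '@wllama/wllama';

// Paths to the wllama wasm builds served by your app.
// (Placeholder paths; the exact keys depend on the wllama version.)
const CONFIG_PATHS = {
  'single-thread/wllama.wasm': '/wasm/single-thread/wllama.wasm',
  'multi-thread/wllama.wasm': '/wasm/multi-thread/wllama.wasm',
};

async function main() {
  const wllama = new Wllama(CONFIG_PATHS);

  // Shards produced by gguf-split; URLs below are illustrative only.
  // As far as I know, loadModelFromUrl accepts a list of shard URLs
  // (it may also auto-detect the remaining shards from the first one).
  await wllama.loadModelFromUrl([
    'https://example.com/qwen2-0.5b-instruct-llamafy.Q4_K_M-00001-of-00003.gguf',
    'https://example.com/qwen2-0.5b-instruct-llamafy.Q4_K_M-00002-of-00003.gguf',
    'https://example.com/qwen2-0.5b-instruct-llamafy.Q4_K_M-00003-of-00003.gguf',
  ]);

  // Simple completion to sanity-check that the model loaded correctly.
  const output = await wllama.createCompletion('Tell me a fun fact about llamas.', {
    nPredict: 64,
    sampling: { temp: 0.7, top_p: 0.9 },
  });
  console.log(output);
}

main();
```

The smaller the shards, the easier it is to keep each download within mobile browser memory limits, which is the whole point of preferring the llamafied GGUF here.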
Thank you for your attention! The model has now been uploaded as Minami-su/Qwen2-0.5B-Instruct-llamafy.
Awesome! I've converted it to GGUF and it's working great! (gguf-Qwen2-0.5B-Instruct-llamafy / gguf-sharded-Qwen2-0.5B-Instruct-llamafy)
Thanks again!
Felladrin changed discussion status to closed