could you please upload mmproj too?
this is a vision model, right?
Yes, KimiVLForConditionalGeneration is indeed a vision model, but apparently it is not marked as such by our systems. Because of this the model wasn't getting queued to nico1 and skipped mmproj extraction. It is also currently queued to kaos and rich1 simultaneously?!? In any case, let's let the quantisation complete and then requeue it to nico1 for mmproj extraction.
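For reference, the extraction step itself is roughly the following (a sketch only; it assumes upstream convert_hf_to_gguf.py's --mmproj flag and uses placeholder paths):

```bash
# Sketch: extract the multimodal projector into its own GGUF.
# Assumes convert_hf_to_gguf.py supports --mmproj; paths/filenames are placeholders.
python convert_hf_to_gguf.py /models/Kimi-VL-A3B-Thinking-2506 \
  --mmproj --outtype f16 \
  --outfile Kimi-VL-A3B-Thinking-2506.mmproj-f16.gguf
```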
@mradermacher
What was the command again to mark an architecture as vision?
It's a hardcoded list in llmjob.pm, so for now you'll have to tell me. I'll add this arch and requeue. Something on my todo list is to generate a fixed list at llama compile time.
and no, i don't know why it would be queued on both boxes unless it was submitted twice.
but i have added Lfm2VlForConditionalGeneration, which is supported, and was not handled as vision model before
@nicoboss it's automated now, the list originally used is /llmjob/share/convert_hf_to_gguf_models.txt, but for speed reasons the preprocessed version in /llmjob/share/convert_hf_to_gguf_models.pm is loaded by llmjob. As always, mostly untested in production.
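For the curious, generating that list boils down to something like this (a sketch; the decorator name in convert_hf_to_gguf.py is an assumption and may differ between llama.cpp versions, and multi-line register calls would be missed):

```bash
# Sketch: collect all architecture names registered in convert_hf_to_gguf.py
# into a flat text file, one architecture per line.
grep -oP '@(?:Model|ModelBase)\.register\(\K[^)]*' convert_hf_to_gguf.py \
  | tr -d '" ' | tr ',' '\n' | sort -u > convert_hf_to_gguf_models.txt
```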
Oh sorry, I forgot that https://github.com/ggml-org/llama.cpp/pull/15051 was text-only. @X5R Sorry, we will need to wait for llama.cpp to support vision for KimiVLForConditionalGeneration.
seems like it's out now? the ggml-org quant has an mmproj
https://huggingface.co/ggml-org/Kimi-VL-A3B-Thinking-2506-GGUF
Edit: Vision still broken, but watch out for this PR: https://github.com/ggml-org/llama.cpp/pull/15458
ggml-org has uploaded the mmproj. Get it from here: https://huggingface.co/ggml-org/Kimi-VL-A3B-Thinking-2506-GGUF/tree/main
but this quant doesn't work with mmproj. broken?
@X5R llama.cpp doesn't support the functionality at the moment, so this doesn't make these quants broken. We will redo this model and add the files once support has landed.
Are there any issues remaining besides 'doesn't build for Microsoft'?
That you'd better ask under https://github.com/ggml-org/llama.cpp/pull/15458. It doesn't look like they ever fixed the "Invalid number of output tokens" error, so there is likely still some work to be done. I don't think the failing Windows HIP build is of any concern, as to me that looks more like a CI/CD issue. In any case, we won't provide mmproj files until this is merged, as nobody on mainline llama.cpp could use them before then anyway.
https://github.com/ggml-org/llama.cpp/pull/15458 is merged now. @mradermacher Please update to the latest version of our fork so we can provide the highly requested MMPROJ files for this model.
Has this been done? Is there lack of interest for some reason?
Thanks!
Sorry for the lack of a response. The mmproj files for this model got uploaded 6 days ago:
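For anyone picking them up, usage is roughly the following (a sketch with placeholder filenames, using llama-mtmd-cli from a recent llama.cpp build):

```bash
# Sketch: run the quant together with its projector; filenames are placeholders.
./llama-mtmd-cli -m Kimi-VL-A3B-Thinking-2506.Q4_K_M.gguf \
  --mmproj Kimi-VL-A3B-Thinking-2506.mmproj-f16.gguf \
  --image example.jpg -p "Describe this image."
```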