0xSero/GLM-4.7-REAP-50
It's queued!
You can check for progress at http://hf.tst.eu/status.html or regularly check the model
summary page at https://hf.tst.eu/model#GLM-4.7-REAP-50-GGUF for quants to appear.
I am also interested in this model.
I noticed it isn't listed on the status page or in the uploaded models list—is there an issue occurring?
Thank you for all your wonderful work.
It failed, but someone cleared it. It seems GLM 4.7 is now supported, so I will try queueing it again; if it fails, I will ask someone to check it out.
As always, you can check the progress in the queue.
Please remind me to check on it in a few hours, I am preparing for the exams and I will for sure forget about it =(
Thank you, I appreciate it!
@nicoboss I think we have a problem with this model. For some reason it's not getting quanted; the log says some tensor is missing. GLM 4.7 itself should be supported, since other GLM 4.7 models have quanted fine. Pls help =)
It’s a dryrun error. This means the model successfully converted to a GGUF, but our automated tests failed because llama.cpp was unable to load it. This probably happens because this is not a stock GLM 4.7 model: some layers were removed (REAP pruning) to make it smaller. Since the failure occurs not during conversion but when loading the converted model, it seems unlikely this can be fixed without submitting a PR to llama.cpp to add support for reaped GLM 4.7 models. It makes no sense for us to quantize this model while llama.cpp cannot load it.
load_tensors: loading model tensors, this can take a while... (mmap = false)
llama_model_load: error loading model: missing tensor 'blk.92.attn_norm.weight'
llama_model_load_from_file_impl: failed to load model
common_init_from_params: failed to load model 'GLM-4.7-REAP-50.gguf~'
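For anyone hitting a similar load failure, a quick way to confirm that pruning left gaps is to scan the GGUF's tensor names and flag blocks whose expected tensors are absent. A minimal sketch of that check, with an illustrative (made-up) tensor list standing in for the real file; in practice you would read the names from the GGUF, e.g. with the `gguf` Python package shipped alongside llama.cpp:

```python
import re

def blocks_missing_attn_norm(tensor_names):
    """Return block indices that have some tensors but no attn_norm.weight."""
    seen = set()       # every block index that appears in any tensor name
    have_norm = set()  # block indices that do have attn_norm.weight
    for name in tensor_names:
        m = re.match(r"blk\.(\d+)\.", name)
        if m:
            idx = int(m.group(1))
            seen.add(idx)
            if name == f"blk.{idx}.attn_norm.weight":
                have_norm.add(idx)
    return sorted(seen - have_norm)

# Hypothetical tensor list mirroring the error above:
# block 92 exists but its attn_norm.weight was pruned away.
names = [
    "blk.91.attn_norm.weight",
    "blk.91.ffn_down.weight",
    "blk.92.ffn_down.weight",
    "blk.93.attn_norm.weight",
]
print(blocks_missing_attn_norm(names))  # → [92]
```

This only explains *why* llama.cpp aborts; the actual fix would still need llama.cpp to tolerate (or remap) the pruned layer layout.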
Thank you for confirming.
I saw that someone had created a GGUF version for REAP40, so I thought it might be possible for 50 as well, but I see that's not the case.
I appreciate your time and help.
