can you do the same for my model?

#1
by KnutJaegersberg - opened

I tried to GPTQ / AWQ my model here, but I haven't got it to work yet.
Could you help out?
https://huggingface.co/KnutJaegersberg/Deacon-34B-qlora

Oh yeah that looks nice as well!

@KnutJaegersberg OK I will take a look this evening

Any chance you could merge the PEFT adapter onto the base model and upload that? It'll make my life easier, as I haven't used my LoRA merging code for a while and need to check it still works with the latest versions of PEFT, Transformers, etc.
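For context, "merging" a LoRA/PEFT adapter just means folding the adapter's low-rank update into the base weights so the merged model behaves identically with no extra modules attached. A minimal sketch of the arithmetic (all shapes, values, and the scaling convention here are illustrative, not taken from the actual Deacon-34B model):

```python
import torch

torch.manual_seed(0)
d_in, d_out, r, alpha = 8, 8, 2, 16
W = torch.randn(d_out, d_in)      # frozen base weight
A = torch.randn(r, d_in) * 0.01   # LoRA down-projection
B = torch.randn(d_out, r) * 0.01  # LoRA up-projection
scaling = alpha / r               # standard LoRA scaling factor

x = torch.randn(4, d_in)

# Forward pass with the adapter attached: base path plus low-rank path
y_adapter = x @ W.T + (x @ A.T @ B.T) * scaling

# Merge: fold the low-rank update into the base weight once
W_merged = W + (B @ A) * scaling
y_merged = x @ W_merged.T

# The merged weight reproduces the adapter-augmented forward pass
assert torch.allclose(y_adapter, y_merged, atol=1e-5)
```

With PEFT itself, the equivalent operation is loading the adapter with `PeftModel.from_pretrained(base_model, adapter_id)` and calling `merge_and_unload()` before saving with `save_pretrained`.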

I'll upload my merged model; it will take a while, a day or so.

OK don't worry, I can look at it before then

I don't really know what happened. I enabled the merge-adapter option in AutoTrain Advanced, but it still produced both an adapter and a model. Maybe the llamafied model is already the merge. When I tried it locally, it sometimes works without merging the adapter and sometimes it doesn't. Perhaps something is also wrong in text-generation-webui.

Didn't work, did it?
