ValueError: Failed to load model

#1 by Zauza25 - opened


I get this error when trying to load this model in the WebUI. Other models work, and it's not a VRAM issue either. The same model runs fine in KoboldCpp.

LWDCLS Research org

I'm not much of an Ooba user, but maybe you can try updating the included llama.cpp version there?
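If updating doesn't help, one way to tell whether the file or the UI is at fault is to load the GGUF directly with llama-cpp-python outside the WebUI. A minimal sketch (the model path is a placeholder, and it assumes llama-cpp-python is installed via pip); if this fails too, the installed llama.cpp build is likely too old for the file:

```python
# Load the GGUF directly with llama-cpp-python, bypassing the WebUI.
from llama_cpp import Llama

llm = Llama(
    model_path="models/your-model.Q4_K_M.gguf",  # placeholder path
    n_ctx=2048,       # a small context is enough for a load test
    n_gpu_layers=0,   # CPU only, to rule out VRAM as a factor
    verbose=True,     # llama.cpp prints the GGUF metadata it parses
)
print(llm("Hello", max_tokens=8)["choices"][0]["text"])
```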

I have the same problem when using llama.cpp as the model loader. I also tried llamacpp_HF.


With llamacpp_HF I get the following error:

ERROR Could not load the model because a tokenizer in Transformers format was not found.
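From what I can tell, the llamacpp_HF loader pairs the GGUF with a Transformers-format tokenizer, so it expects the original (non-GGUF) model's tokenizer files in the same folder as the GGUF. Something like this should fetch them (a sketch using huggingface_hub; the repo ID and target folder are placeholders):

```python
# Download only the tokenizer files from the original model repo and place
# them next to the GGUF so llamacpp_HF can find them.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="original-author/original-model",  # placeholder: the source repo
    allow_patterns=["tokenizer*", "special_tokens_map.json"],
    local_dir="text-generation-webui/models/your-model",  # placeholder folder
)
```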

LWDCLS Research org

Use KoboldCpp to run GGUF models, or try updating the llama.cpp version in Ooba.
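If updating doesn't fix it, it can also be worth checking the file's GGUF format version against what the bundled llama.cpp supports; an older build may reject newer files. The header is easy to read with the standard library (a minimal sketch; the path is a placeholder):

```python
# Read the GGUF header: per the GGUF spec, the file starts with the magic
# bytes b"GGUF" followed by a little-endian uint32 format version.
import struct

with open("models/your-model.Q4_K_M.gguf", "rb") as f:  # placeholder path
    magic = f.read(4)
    (version,) = struct.unpack("<I", f.read(4))

assert magic == b"GGUF", f"not a GGUF file (magic={magic!r})"
print(f"GGUF version: {version}")
```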
