llama.cpp failed to load fp16 & q8
#1 opened by money82
llama_model_load: error loading model: create_tensor: tensor 'output.weight' not found
llama_load_model_from_file: failed to load model
This model uses tied word embeddings, so the GGUF has no separate 'output.weight' tensor (the output projection reuses 'token_embd.weight'). Older llama.cpp builds expect 'output.weight' and fail with this error; a newer llama.cpp build should load the model.
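A minimal sketch to check this yourself, assuming the `gguf` Python package that ships with llama.cpp (`pip install gguf`) and a hypothetical local file name `model-f16.gguf`: list the tensor names in the GGUF and confirm that 'output.weight' is absent while 'token_embd.weight' is present, which is what tied embeddings look like on disk.

```python
# Sketch: inspect tensor names in a GGUF file (assumes the `gguf` package
# from the llama.cpp repo; "model-f16.gguf" is a placeholder path).
from gguf import GGUFReader

reader = GGUFReader("model-f16.gguf")
names = [t.name for t in reader.tensors]

# With tied word embeddings there is no separate output projection tensor;
# newer llama.cpp reuses the token embedding matrix instead.
print("output.weight present:    ", "output.weight" in names)
print("token_embd.weight present:", "token_embd.weight" in names)
```

If 'output.weight' is indeed missing, updating llama.cpp (rebuilding from a recent commit or installing a recent release) is the fix rather than re-quantizing the model.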