Error recognizing the file gemma-2b-it.Q8_0.gguf, even after updating llama.cpp and making new binaries

#1
by auralodyssey - opened

I'm facing the following error:

ggml_init_cublas: CUDA_USE_TENSOR_CORES: yes
ggml_init_cublas: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 3070 Laptop GPU, compute capability 8.6, VMM: yes
llama_model_loader: loaded meta data with 21 key-value pairs and 164 tensors from E:\LLM\Mixtral\gemma-2b-it.q8_0.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = gemma
llama_model_loader: - kv 1: general.name str = gemma-2b-it
llama_model_loader: - kv 2: gemma.context_length u32 = 8192
llama_model_loader: - kv 3: gemma.block_count u32 = 18
llama_model_loader: - kv 4: gemma.embedding_length u32 = 2048
llama_model_loader: - kv 5: gemma.feed_forward_length u32 = 16384
llama_model_loader: - kv 6: gemma.attention.head_count u32 = 8
llama_model_loader: - kv 7: gemma.attention.head_count_kv u32 = 1
llama_model_loader: - kv 8: gemma.attention.key_length u32 = 256
llama_model_loader: - kv 9: gemma.attention.value_length u32 = 256
llama_model_loader: - kv 10: gemma.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 11: tokenizer.ggml.model str = llama
llama_model_loader: - kv 12: tokenizer.ggml.bos_token_id u32 = 2
llama_model_loader: - kv 13: tokenizer.ggml.eos_token_id u32 = 1
llama_model_loader: - kv 14: tokenizer.ggml.padding_token_id u32 = 0
llama_model_loader: - kv 15: tokenizer.ggml.unknown_token_id u32 = 3
llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,256128] = ["", "", "", "", ...
llama_model_loader: - kv 17: tokenizer.ggml.scores arr[f32,256128] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 18: tokenizer.ggml.token_type arr[i32,256128] = [3, 3, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 19: general.quantization_version u32 = 2
llama_model_loader: - kv 20: general.file_type u32 = 7
llama_model_loader: - type f32: 37 tensors
llama_model_loader: - type q8_0: 127 tensors
error loading model: unknown model architecture: 'gemma'
llama_load_model_from_file: failed to load model
AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 0 | VSX = 0 |
Traceback (most recent call last):
  File "e:\LLM\Mixtral\Priyansh\llm_processing.py", line 12, in <module>
    llm = Llama(model_path=model_path, n_ctx=4096, n_threads=11, n_gpu_layers=8) #changed model loading method for gemma
  File "E:\LLM\Mixtral\mixtral_llm\lib\site-packages\llama_cpp\llama.py", line 962, in __init__
    self._n_vocab = self.n_vocab()
  File "E:\LLM\Mixtral\mixtral_llm\lib\site-packages\llama_cpp\llama.py", line 2274, in n_vocab
    return self._model.n_vocab()
  File "E:\LLM\Mixtral\mixtral_llm\lib\site-packages\llama_cpp\llama.py", line 251, in n_vocab
    assert self.model is not None
AssertionError

Any ideas why?

@auralodyssey it seems there is an issue both with converting Gemma to GGUF and with using the original GGUF by Google to quantize models: https://github.com/ggerganov/llama.cpp/issues/5635

However, this error is different. If you pull the latest changes from main in llama.cpp and rebuild it, you shouldn't see this error. (At least I don't; it's just the output quality that is a bit strange for the moment.)
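
One quick way to check whether the environment is actually picking up a Gemma-aware build is a guarded load in the same Python environment. This is only a minimal sketch, not from the thread: the path is a placeholder, and note that llama-cpp-python bundles its own copy of llama.cpp, so the Python package may need updating separately from standalone llama.cpp binaries.

```python
import llama_cpp
from llama_cpp import Llama

# Print the installed binding version; Gemma support landed in llama.cpp around
# late February 2024, so an older wheel will still report "unknown model architecture".
print("llama-cpp-python version:", llama_cpp.__version__)

model_path = r"E:\LLM\Mixtral\gemma-2b-it.q8_0.gguf"  # placeholder path
try:
    llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=8)
    print("Loaded OK, vocab size:", llm.n_vocab())
except (AssertionError, ValueError) as e:
    # Older llama-cpp-python raises AssertionError here; newer versions raise ValueError.
    print("Load failed -- this build likely predates Gemma support:", e)
```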

I have done the updating and rebuilding. I pulled the latest changes with git pull and then remade the build folder, but the error still occurs. Am I missing something? Even LM Studio is showing an unsupported architecture.

@auralodyssey all I can say for now is that only the original GGUF model provided by Google works; any other conversion or quantization doesn't work. I think we have to wait for some newer changes in llama.cpp and follow up in the issue I shared to confirm everything works.
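
For what it's worth, here is a hedged sketch of grabbing the original Google-provided GGUF from the Hub and loading it directly. The repo id and filename are assumptions on my part, and the repo is gated, so the license has to be accepted on the Hub and a token configured beforehand.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Assumed repo id and filename for Google's original GGUF release.
gguf_path = hf_hub_download(
    repo_id="google/gemma-2b-it",
    filename="gemma-2b-it.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096, n_gpu_layers=8)
print("Loaded:", gguf_path)
```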

This should be fine now, with good responses.
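
For anyone verifying the fix, a small spot check of response quality might look like the following. The path and sampling settings are illustrative; the prompt format follows Gemma's turn markers.

```python
from llama_cpp import Llama

llm = Llama(
    model_path=r"E:\LLM\Mixtral\gemma-2b-it.q8_0.gguf",  # illustrative path
    n_ctx=4096,
    n_gpu_layers=8,
)

# Gemma instruction-tuned models expect <start_of_turn>/<end_of_turn> markers.
prompt = "<start_of_turn>user\nWhy is the sky blue?<end_of_turn>\n<start_of_turn>model\n"
output = llm(prompt, max_tokens=128, stop=["<end_of_turn>"])
print(output["choices"][0]["text"].strip())
```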

MaziyarPanahi changed discussion status to closed
