
NOTE: You will need a recent build of llama.cpp (at least commit 494c870) to run these quants.

GGUF importance matrix (imatrix) quants for https://huggingface.co/fblgit/UNA-SimpleSmaug-34b-v1beta

Layers: 60
Context: 32768
Template:

    <|startoftext|>[INST] <<SYS>>
    {instructions}
    <</SYS>>

    {prompt} [/INST]
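The template above can be filled in programmatically. A minimal sketch (the `build_prompt` helper name is hypothetical; the `[/INST]` closing tag is assumed, since Llama-style chat templates close the instruction block that way):

```python
def build_prompt(instructions: str, prompt: str) -> str:
    # Fill the chat template from the model card:
    # system instructions between <<SYS>> markers, then the user prompt.
    return (
        "<|startoftext|>[INST] <<SYS>>\n"
        f"{instructions}\n"
        "<</SYS>>\n\n"
        f"{prompt} [/INST]"
    )

print(build_prompt("You are a helpful assistant.", "Summarize GGUF in one sentence."))
```

The resulting string is what you would pass as the raw prompt (e.g. via llama.cpp's `-p` flag), keeping within the 32768-token context listed above.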
Model size: 34.4B params
Architecture: llama
Available quants: 4-bit, 5-bit, 6-bit
Model tree for dranger003/UNA-SimpleSmaug-34b-v1beta-iMat.GGUF