
4bit GGUF quantization of TinyLlama-1.1B-intermediate-step-955k-token-2T

I generated the file with the make-ggml.py script, using this command:

```sh
python make-ggml.py ~/ooba/models/TinyLlama_TinyLlama-1.1B-intermediate-step-955k-token-2T/ --model_type=llama --quants=Q4_K_M
```

The original model is small enough to ship as a single safetensors file, `model.safetensors`. I had to rename it to `model-00001-of-00001.safetensors` so that the script would recognize and load the model properly.
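The rename step above can be sketched as a small helper. This is an illustrative snippet, not part of make-ggml.py; the function name and return convention are my own, and the path is assumed to be your local model directory.

```python
from pathlib import Path

def rename_single_shard(model_dir: str) -> bool:
    """Rename a single-file checkpoint (model.safetensors) to the
    sharded naming convention (model-00001-of-00001.safetensors)
    that the conversion script expects.

    Returns True if a rename was performed, False otherwise.
    """
    single = Path(model_dir) / "model.safetensors"
    sharded = Path(model_dir) / "model-00001-of-00001.safetensors"
    if single.exists() and not sharded.exists():
        single.rename(sharded)
        return True
    return False
```

Run it on the model directory before invoking make-ggml.py; it is a no-op if the file has already been renamed.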

- Format: GGUF
- Model size: 1.1B params
- Architecture: llama
- Quantization: 4-bit (Q4_K_M)