
Quantizations of https://huggingface.co/mlabonne/EvolCodeLlama-7b

From original readme

This is a codellama/CodeLlama-7b-hf model fine-tuned using QLoRA (4-bit precision) on the mlabonne/Evol-Instruct-Python-1k dataset.
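For context, a QLoRA setup like the one described above pairs a 4-bit-quantized base model with trainable LoRA adapters. A minimal sketch using `transformers`, `bitsandbytes`, and `peft` follows; all hyperparameters here are illustrative assumptions, not the values actually used for EvolCodeLlama-7b.

```python
# Sketch of a QLoRA fine-tuning setup: load the base model in 4-bit
# precision, then attach LoRA adapters. Hyperparameters are assumptions.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization for the frozen base weights
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-7b-hf",
    quantization_config=bnb_config,
)

# Small trainable LoRA adapters on the attention projections (illustrative)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

Training would then proceed on the Evol-Instruct-Python-1k examples with any standard causal-LM trainer; only the adapter weights are updated.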

Format: GGUF
Model size: 6.74B params
Architecture: llama

Available quantizations: 1-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
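GGUF files at these bit widths are typically run with llama.cpp or a compatible client. A minimal sketch of selecting a quant and building a `llama-cli` invocation; the exact file names in this repo are assumptions based on the common `Model.QN_K.gguf` naming convention:

```python
# Sketch: map a bit width to an assumed GGUF file name and build the
# argv for llama.cpp's llama-cli. File names are illustrative guesses.
QUANTS = {
    "2-bit": "EvolCodeLlama-7b.Q2_K.gguf",
    "4-bit": "EvolCodeLlama-7b.Q4_K_M.gguf",
    "8-bit": "EvolCodeLlama-7b.Q8_0.gguf",
}

def llama_cli_command(bits: str, prompt: str, n_predict: int = 128) -> list[str]:
    """Return an argv list for llama-cli using the chosen quantization."""
    return [
        "llama-cli",
        "-m", QUANTS[bits],       # path to the GGUF file
        "-p", prompt,             # prompt text
        "-n", str(n_predict),     # number of tokens to generate
    ]

print(llama_cli_command("4-bit", "Write a Python function to reverse a list."))
```

Lower-bit quants trade output quality for a smaller memory footprint; 4-bit and 5-bit variants are the usual middle ground for a 7B model.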

Inference API (serverless) has been turned off for this model.