A version of CodeLlama-70b converted to 4-bit with bitsandbytes. For more information about the base model, refer to its model page.
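
As a minimal sketch of how such a pre-quantized checkpoint is typically used, the model can be loaded directly with `transformers`, which picks up the bitsandbytes quantization config stored in the repository. This assumes `transformers`, `accelerate`, and `bitsandbytes` are installed; the prompt is purely illustrative.

```python
# Requires: pip install transformers accelerate bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cmarkea/CodeLlama-70b-hf-4bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The 4-bit bitsandbytes quantization config ships with the checkpoint,
# so no explicit BitsAndBytesConfig is needed at load time.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```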

Impact on performance

The following figure shows the performance of a set of models as a function of their required RAM. The quantized models deliver performance on par with their original counterparts while requiring significantly less RAM.

*Figure: performance of original and quantized models plotted against required RAM.*
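
As a rough way to check the memory gain on your own hardware, `transformers` exposes `get_memory_footprint()` on loaded models. The snippet below is a sketch; it assumes the quantized checkpoint fits on the available devices, and the resulting figure can be compared against a full-precision load of the base model.

```python
from transformers import AutoModelForCausalLM

# Load the 4-bit checkpoint and report its on-device memory footprint.
quantized = AutoModelForCausalLM.from_pretrained(
    "cmarkea/CodeLlama-70b-hf-4bit", device_map="auto"
)
print(f"4-bit footprint: {quantized.get_memory_footprint() / 1e9:.1f} GB")
```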
