Model Card for Aya-23-8B

This is a 4-bit quantized version of Aya 23 8B. It uses roughly 4x less memory than the original FP16 weights.
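As a minimal sketch of how 4-bit loading works with `transformers` and `bitsandbytes`, the snippet below quantizes the original Cohere checkpoint on the fly; the repo id and generation prompt are assumptions for illustration, and you would substitute this repository's id to load the pre-quantized weights directly.

```python
# Sketch: load Aya 23 8B in 4-bit (NF4) with bitsandbytes via transformers.
# "CohereForAI/aya-23-8B" is assumed to be the original unquantized checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "CohereForAI/aya-23-8B"  # adjust to this repo's id for the pre-quantized weights

# NF4 quantization with bf16 compute roughly quarters weight memory vs. FP16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Quick generation check using the chat template.
messages = [{"role": "user", "content": "Translate 'hello' into Turkish."}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```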

Model size: 4.65B params (Safetensors)
Tensor types: F32, FP16, U8