omost-dolphin-2.9-llama3-8b-4bits is Omost's llama3-8b model with dolphin-2.9 instruct pretraining, quantized to nf4 (4-bit).
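
Since the weights ship pre-quantized in nf4, the checkpoint can be loaded with the usual transformers + bitsandbytes stack. A minimal loading sketch, assuming the repo id lllyasviel/omost-dolphin-2.9-llama3-8b-4bits and that the quantization config is baked into the checkpoint; the dtype and device settings here are illustrative, not prescribed by the model authors:

```python
# Minimal loading sketch (assumes transformers and bitsandbytes are installed).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "lllyasviel/omost-dolphin-2.9-llama3-8b-4bits"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # computation dtype; the nf4 storage format comes from the checkpoint itself
    device_map="auto",           # place layers automatically across available devices
)
```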

Model size: 4.65B params (Safetensors)
Tensor types: BF16, F32, U8
Inference: not available via the HF Inference API or any supported Inference Providers (disabled explicitly by the model authors)
