Mistral 7B v0.2 - AWQ GGUF

These files are in GGUF format.

The model was converted to GGUF with llama.cpp, using AWQ as the quantization method.
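
For context, a typical llama.cpp workflow for producing a file like this first converts the original checkpoint to a full-precision GGUF and then quantizes it with the quantize tool. The commands below are only an illustrative sketch with placeholder paths and filenames; exactly how the AWQ scales are applied during conversion depends on the llama.cpp version used and is not specified in this card.

# Convert the original Hugging Face checkpoint to a 16-bit GGUF (placeholder paths)
python convert.py /path/to/original-model --outtype f16 --outfile mistral-7b-f16.gguf
# Quantize the f16 GGUF down to 2-bit Q2_K
./quantize mistral-7b-f16.gguf Mistral-7b-v0.1-Q2_K.gguf Q2_K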

How to use the model with llama.cpp

./main -m Mistral-7b-v0.1-Q2_K.gguf -n 128 --prompt "Once upon a time"
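
For an interactive session instead of a one-shot completion, the same main binary can be run in interactive mode; the context size, token count, and prompt below are illustrative values, not part of the original instructions.

# Start an interactive session with a 2048-token context (example settings)
./main -m Mistral-7b-v0.1-Q2_K.gguf -c 2048 -n 256 --color -i --prompt "You are a helpful assistant."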

Model details

Format: GGUF
Model size: 7.24B params
Architecture: llama
Quantization: 2-bit (Q2_K)