
Mistral 7B Instruct v0.2 - GGUF

This is a quantized version of mistralai/Mistral-7B-Instruct-v0.2. Two quantization methods were used:

  • Q5_K_M: 5-bit; recommended; very low quality loss.
  • Q4_K_M: 4-bit; recommended; balanced quality and size.

Description

This repo contains GGUF format model files for Mistral AI's Mistral 7B Instruct v0.2.

This model was quantized in Google Colab; the notebook is linked here.
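As a minimal usage sketch, the snippet below builds a prompt in Mistral's [INST] instruct format and shows, in comments, how one might load a quantized file with llama-cpp-python. The GGUF file name is an assumption based on typical naming; check this repo's Files tab for the exact name.

```python
def build_prompt(user_message: str) -> str:
    # Mistral-7B-Instruct expects instructions wrapped in [INST] ... [/INST]
    return f"<s>[INST] {user_message.strip()} [/INST]"


prompt = build_prompt("Explain GGUF quantization in one sentence.")
print(prompt)

# To run a quantized file locally, one option is llama-cpp-python
# (pip install llama-cpp-python). Hypothetical file name shown:
#
# from llama_cpp import Llama
# llm = Llama(model_path="mistral-7b-instruct-v0.2.Q4_K_M.gguf", n_ctx=4096)
# out = llm(prompt, max_tokens=128)
# print(out["choices"][0]["text"])
```

Q4_K_M is the smaller download; Q5_K_M trades extra disk and memory for lower quality loss.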

Model details

  • Format: GGUF
  • Model size: 7.24B params
  • Architecture: llama
  • Quantizations provided: 4-bit, 5-bit
Inference API (serverless) has been turned off for this model.

Model tree for wenqiglantz/Mistral-7B-Instruct-v0.2-GGUF
