
Mistral 7B v0.1 - GGUF

This repo contains quantized versions of mistralai/Mistral-7B-v0.1. Two quantization methods were used:

  • Q5_K_M: 5-bit; preserves most of the model's quality
  • Q4_K_M: 4-bit; smaller footprint and lower memory use, at a slight quality cost
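
To run one of these files locally, something like the following works with llama-cpp-python. This is a minimal sketch; the GGUF filename is an assumption, so check the repo's file list for the exact name.

```python
# Minimal inference sketch using llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="mistral-7b-v0.1.Q4_K_M.gguf",  # assumed filename; adjust to the actual file
    n_ctx=4096,       # context window to allocate
    n_gpu_layers=-1,  # offload all layers to GPU if llama.cpp was built with GPU support
)

out = llm("Q: What is quantization? A:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"])
```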

Description

This repo contains GGUF format model files for Mistral AI's Mistral 7B v0.1.

This model was quantized in Google Colab.
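
For reference, the usual llama.cpp pipeline for producing files like these looks roughly like the sketch below. The script and binary names vary across llama.cpp versions (older checkouts use convert.py and quantize), so treat the exact invocations as assumptions rather than the commands actually used here.

```python
# Hedged sketch of a typical llama.cpp quantization pipeline, driven from Python.
import subprocess

# 1. Convert the original HF checkpoint to a full-precision (f16) GGUF file.
subprocess.run(
    ["python", "convert_hf_to_gguf.py", "Mistral-7B-v0.1",  # path to the HF checkpoint
     "--outtype", "f16", "--outfile", "mistral-7b-v0.1.f16.gguf"],
    check=True,
)

# 2. Quantize the f16 GGUF down to 4-bit; repeat with Q5_K_M for the 5-bit file.
subprocess.run(
    ["./llama-quantize", "mistral-7b-v0.1.f16.gguf",
     "mistral-7b-v0.1.Q4_K_M.gguf", "Q4_K_M"],
    check=True,
)
```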

Model details

  • Format: GGUF
  • Model size: 7.24B params
  • Architecture: llama
  • Available quantizations: 4-bit (Q4_K_M) and 5-bit (Q5_K_M)
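
The fields above are stored in the GGUF header itself; a quick way to confirm them is the gguf Python package, as in this sketch (the filename is again an assumption).

```python
# List the metadata keys embedded in a GGUF file (pip install gguf).
from gguf import GGUFReader

reader = GGUFReader("mistral-7b-v0.1.Q4_K_M.gguf")  # assumed filename
for name in reader.fields:  # e.g. general.architecture, llama.context_length, ...
    print(name)
```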

