Mistral 7B v0.1 - GGUF
This is a quantized model of mistralai/Mistral-7B-v0.1. Two quantization methods were used:
- Q5_K_M: 5-bit, preserves most of the model's performance
- Q4_K_M: 4-bit, smaller footprint and lower memory use (see the loading sketch after this list)
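The quantized files can be fetched from this repo and run locally with llama-cpp-python. Below is a minimal loading sketch; the exact `.gguf` filename is an assumption, so check the repo's file listing for the actual artifact names.

```python
# Minimal sketch: download a quantized file and run it with llama-cpp-python.
# The .gguf filename is a hypothetical example, not confirmed by this repo.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="wenqiglantz/Mistral-7B-v0.1-GGUF",
    filename="mistral-7b-v0.1.Q4_K_M.gguf",  # assumed filename
)

llm = Llama(model_path=model_path, n_ctx=2048)
output = llm("Q: What does GGUF stand for? A:", max_tokens=64)
print(output["choices"][0]["text"])
```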
Description
This repo contains GGUF format model files for Mistral AI's Mistral 7B v0.1.
This model was quantized in Google Colab.
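For reference, this kind of quantization is typically done with llama.cpp's conversion and quantization tools, which can be run from a Colab cell. The sketch below shows that general workflow under stated assumptions; it is not the exact notebook used here, and the paths and filenames are placeholders.

```python
# Sketch of a typical GGUF quantization workflow with llama.cpp, as it
# might be run from Colab. Paths and filenames are assumptions.
import subprocess

# 1. Convert the Hugging Face checkpoint to a full-precision GGUF file.
subprocess.run(
    ["python", "llama.cpp/convert.py", "Mistral-7B-v0.1",
     "--outtype", "f16", "--outfile", "mistral-7b-v0.1.f16.gguf"],
    check=True,
)

# 2. Quantize the f16 file to each target method.
for method in ("Q5_K_M", "Q4_K_M"):
    subprocess.run(
        ["llama.cpp/quantize",
         "mistral-7b-v0.1.f16.gguf",
         f"mistral-7b-v0.1.{method}.gguf",
         method],
        check=True,
    )
```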
Model tree for wenqiglantz/Mistral-7B-v0.1-GGUF
- Base model: mistralai/Mistral-7B-v0.1