---
license: apache-2.0
pipeline_tag: text-generation
tags:
- finetuned
inference: true
base_model: mistralai/Mistral-7B-v0.1
model_creator: Mistral AI_
model_name: Mistral 7B v0.1
model_type: mistral
prompt_template: '[INST] {prompt} [/INST] '
quantized_by: wenqiglantz
---

# Mistral 7B v0.1 - GGUF

This is a quantized model for `mistralai/Mistral-7B-v0.1`. Two quantization methods were used:

- Q5_K_M: 5-bit quantization that preserves most of the model's performance
- Q4_K_M: 4-bit quantization with a smaller footprint that saves more memory

## Description

This repo contains GGUF format model files for [Mistral AI_'s Mistral 7B v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1). The model was quantized in Google Colab.