wenqiglantz committed
Commit e765e2b
1 Parent(s): b57110e

Upload 2 files

Files changed (2)
  1. README.md +27 -0
  2. config.json +3 -0
README.md ADDED
@@ -0,0 +1,27 @@
+ ---
+ license: apache-2.0
+ pipeline_tag: text-generation
+ tags:
+ - finetuned
+ inference: true
+ base_model: mistralai/Mistral-7B-Instruct-v0.2
+ model_creator: Mistral AI_
+ model_name: Mistral 7B Instruct v0.2
+ model_type: mistral
+ prompt_template: '<s>[INST] {prompt} [/INST]
+ '
+ quantized_by: wenqiglantz
+ ---
+
+ # Mistral 7B Instruct v0.2 - GGUF
+
+ This is a quantized model for `mistralai/Mistral-7B-Instruct-v0.2`. Two quantization methods were used:
+ - Q5_K_M: 5-bit, preserves most of the model's performance
+ - Q4_K_M: 4-bit, smaller footprint and saves more memory
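
As a rough usage sketch (not part of the committed files): either quantized file can be loaded with llama-cpp-python. The `.gguf` file name below is a placeholder for whichever file you download from this repo, and the prompt follows the `[INST] ... [/INST]` template declared in the front matter.

```python
# Illustrative only: run one of this repo's quantized GGUF files with llama-cpp-python.
# The file name is hypothetical; substitute the actual Q5_K_M or Q4_K_M file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="mistral-7b-instruct-v0.2.Q4_K_M.gguf",  # or the Q5_K_M file
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers to GPU if available; use 0 for CPU-only
)

# Prompt template from the model card: <s>[INST] {prompt} [/INST]
# (llama-cpp-python prepends the <s> BOS token itself by default.)
result = llm("[INST] Summarize what GGUF quantization does. [/INST]", max_tokens=128)
print(result["choices"][0]["text"])
```

Switching between the two quantizations only changes `model_path`; Q5_K_M uses more memory but stays closer to the original model's quality.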
+
+ <!-- description start -->
+ ## Description
+
+ This repo contains GGUF format model files for [Mistral AI_'s Mistral 7B Instruct v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2).
+
+ This model was quantized in Google Colab.
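
For context, a Colab quantization run that produces GGUF files like these typically drives llama.cpp's conversion and quantization tools. The sketch below is an assumption about that workflow (script and binary names have changed across llama.cpp releases), not a record of the exact notebook used.

```python
# Hypothetical reconstruction of a Colab quantization run with llama.cpp
# (clone and build llama.cpp first). Tool names follow recent releases and
# may differ from the version actually used for this repo.
import subprocess

# 1. Convert the original Hugging Face checkpoint to a full-precision GGUF file.
subprocess.run(
    [
        "python", "llama.cpp/convert_hf_to_gguf.py",
        "Mistral-7B-Instruct-v0.2",                      # local snapshot of the base model
        "--outfile", "mistral-7b-instruct-v0.2.f16.gguf",
        "--outtype", "f16",
    ],
    check=True,
)

# 2. Quantize the f16 GGUF into the two variants listed in this model card.
for quant in ("Q5_K_M", "Q4_K_M"):
    subprocess.run(
        [
            "llama.cpp/llama-quantize",
            "mistral-7b-instruct-v0.2.f16.gguf",
            f"mistral-7b-instruct-v0.2.{quant}.gguf",
            quant,
        ],
        check=True,
    )
```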
config.json ADDED
@@ -0,0 +1,3 @@
+ {
+ "model_type": "mistral"
+ }