wenqiglantz committed
Commit 71c6884
1 Parent(s): e765e2b

Update README.md

Files changed (1)
  1. README.md +5 -5
README.md CHANGED
@@ -4,24 +4,24 @@ pipeline_tag: text-generation
 tags:
 - finetuned
 inference: true
-base_model: mistralai/Mistral-7B-Instruct-v0.2
+base_model: mistralai/Mistral-7B-v0.1
 model_creator: Mistral AI_
-model_name: Mistral 7B Instruct v0.2
+model_name: Mistral 7B v0.1
 model_type: mistral
 prompt_template: '<s>[INST] {prompt} [/INST]
 '
 quantized_by: wenqiglantz
 ---
 
-# Mistral 7B Instruct v0.2 - GGUF
+# Mistral 7B v0.1 - GGUF
 
-This is a quantized model for `mistralai/Mistral-7B-Instruct-v0.2`. Two quantization methods were used:
+This is a quantized model for `mistralai/Mistral-7B-v0.1`. Two quantization methods were used:
 - Q5_K_M: 5-bit, preserves most of the model's performance
 - Q4_K_M: 4-bit, smaller footprint and saves more memory
 
 <!-- description start -->
 ## Description
 
-This repo contains GGUF format model files for [Mistral AI_'s Mistral 7B Instruct v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2).
+This repo contains GGUF format model files for [Mistral AI_'s Mistral 7B v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).
 
 This model was quantized in Google Colab.
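
The `prompt_template` field in the metadata above describes how user text should be wrapped before it is sent to the model. A minimal sketch in Python, using the template string from the model card (the helper name `format_prompt` is ours, not part of the repo):

```python
def format_prompt(prompt: str) -> str:
    """Wrap user text in the instruct template from the model card:
    '<s>[INST] {prompt} [/INST]'"""
    return f"<s>[INST] {prompt} [/INST]"

# The wrapped string is what gets tokenized and sent to the model.
print(format_prompt("Why is the sky blue?"))
# → <s>[INST] Why is the sky blue? [/INST]
```

With a GGUF runtime such as llama.cpp or llama-cpp-python, this wrapped string would be passed as the completion prompt; note that some tokenizers add the `<s>` BOS token automatically, in which case it should be omitted from the template.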