---
license: other
license_name: mnpl
license_link: https://mistral.ai/licenses/MNPL-0.1.md
tags:
- code
language:
- code
---

# Pure quantizations of `Codestral-22B-v0.1` for [mistral.java](https://github.com/mukel/mistral.java)

In the wild, Q8_0 quantizations are fine, but Q4_0 quantizations are rarely pure; e.g. the `output.weight` tensor is often quantized with Q6_K instead of Q4_0. A pure Q4_0 quantization can be generated from a high-precision (F32, F16, BFLOAT16) .gguf source with the `quantize` utility from llama.cpp as follows:

```
./quantize --pure ./Codestral-22B-v0.1-F32.gguf ./Codestral-22B-v0.1-Q4_0.gguf Q4_0
```
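
To confirm that a file is actually pure, the per-tensor quantization types can be listed with the `gguf` Python package that ships with llama.cpp (`pip install gguf`). A minimal sketch, assuming gguf-py's `GGUFReader` API:

```python
# Count tensor types in a .gguf file. A pure Q4_0 quantization should show
# only Q4_0, plus the small F32 norm tensors llama.cpp never quantizes.
from collections import Counter

from gguf import GGUFReader  # gguf-py, bundled with llama.cpp

reader = GGUFReader("Codestral-22B-v0.1-Q4_0.gguf")
print(Counter(t.tensor_type.name for t in reader.tensors))
```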

Original model: [https://huggingface.co/mistralai/Codestral-22B-v0.1](https://huggingface.co/mistralai/Codestral-22B-v0.1)

**Note that this model does not support a System prompt.**

Codestral-22B-v0.1 is trained on a diverse dataset of 80+ programming languages, including the most popular ones, such as Python, Java, C, C++, JavaScript, and Bash (more details in the [Blogpost](https://mistral.ai/news/codestral/)). The model can be queried:
- As instruct, for instance to answer any questions about a code snippet (write documentation, explain, factorize) or to generate code following specific indications
- As Fill in the Middle (FIM), to predict the middle tokens between a prefix and a suffix (very useful for software development add-ons like in VS Code); a prompt sketch follows this list
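
For FIM, Mistral's tokenizer places the suffix before the prefix. A sketch of a raw FIM prompt, assuming the `[SUFFIX]`/`[PREFIX]` control tokens used by mistral-common (a runtime must encode these as special tokens, not plain text):

```python
# Build a raw fill-in-the-middle prompt; the model generates the middle
# tokens that splice between prefix and suffix.
prefix = "def fibonacci(n):\n    "
suffix = "\n    return result\n"
prompt = f"[SUFFIX]{suffix}[PREFIX]{prefix}"
# Feed `prompt` to the model and stop at EOS; the completion is the body
# that belongs between `prefix` and `suffix`.
```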