Lewdiculous committed on
Commit
b4e2311
1 Parent(s): c353b6d

format quant steps

Files changed (1): README.md +8 -4
README.md CHANGED
@@ -14,7 +14,8 @@ tags:
 Azure_Dusk-v0.2
 
 **Description:** <br>
-"Following up on Crimson_Dawn-v0.2 we have Azure_Dusk-v0.2! Training on Mistral-Nemo-Base-2407 this time I've added significantly more data, as well as trained using RSLoRA as opposed to regular LoRA. Another key change is training on ChatML as opposed to Mistral Formatting." – Author. <br>
+"Following up on Crimson_Dawn-v0.2 we have Azure_Dusk-v0.2! Training on Mistral-Nemo-Base-2407 this time I've added significantly more data, as well as trained using RSLoRA as opposed to regular LoRA. Another key change is training on ChatML as opposed to Mistral Formatting." <br>
+– by Author. <br>
 
 As described, use the ChatML prompt format. <br>
 
@@ -27,7 +28,10 @@ As described, use the ChatML prompt format. <br>
 > Original model page: <br>
 > https://huggingface.co/Epiculous/Azure_Dusk-v0.2
 >
-> Quantized using [llama.cpp](https://github.com/ggerganov/llama.cpp): <br>
-> [b3733](https://github.com/ggerganov/llama.cpp/releases/tag/b3733)
-
+> Quantized using [llama.cpp](https://github.com/ggerganov/llama.cpp)-[b3733](https://github.com/ggerganov/llama.cpp/releases/tag/b3733): <br>
+> ```
+> 1. Base⇢ Convert-GGUF(FP16)⇢ Generate-Imatrix-Data(FP16)
+> 2. Base⇢ Convert-GGUF(BF16)⇢ Use-Imatrix-Data(FP16)⇢ Quantize-GGUF(Imatrix-Quants)
+> ```
+>
 ![model-image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/n3-g_YTk3FY-DBzxXd28E.png)
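The quant steps added in this commit can be sketched roughly as the following llama.cpp invocations. This is a minimal illustration, not the uploader's exact script: the model directory, calibration file name, output paths, and the `Q4_K_M` target are assumed for the example; only `convert_hf_to_gguf.py`, `llama-imatrix`, and `llama-quantize` are real llama.cpp tools.

```shell
# Step 1: Base ⇢ Convert-GGUF(FP16) ⇢ Generate-Imatrix-Data(FP16)
# Convert the HF checkpoint to an FP16 GGUF, then compute importance-matrix
# data from it using a calibration text file (name here is illustrative).
python convert_hf_to_gguf.py ./Azure_Dusk-v0.2 --outtype f16 --outfile model-f16.gguf
./llama-imatrix -m model-f16.gguf -f calibration-data.txt -o imatrix.dat

# Step 2: Base ⇢ Convert-GGUF(BF16) ⇢ Use-Imatrix-Data(FP16) ⇢ Quantize-GGUF(Imatrix-Quants)
# Re-convert the base model to BF16, then quantize it while feeding in the
# imatrix generated from the FP16 pass (Q4_K_M is one example target quant).
python convert_hf_to_gguf.py ./Azure_Dusk-v0.2 --outtype bf16 --outfile model-bf16.gguf
./llama-quantize --imatrix imatrix.dat model-bf16.gguf model-Q4_K_M.gguf Q4_K_M
```

The point of the two passes is that the imatrix is measured once on the FP16 conversion and then reused when producing each imatrix-aware quant from the BF16 conversion.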