GGUF · 5 datasets · Composer · MosaicML · llm-foundry
maddes8cht committed
Commit 2eee51e
1 parent: cee06ee

"Update README.md"

Files changed (1)
  1. README.md +3 -1
README.md CHANGED
@@ -54,7 +54,7 @@ The core project making use of the ggml library is the [llama.cpp](https://githu
 
 There is a bunch of quantized files available. How to choose the best for you:
 
- # legacy quants
+ # Legacy quants
 
 Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
 Nevertheless, they are fully supported, as there are several circumstances that cause certain model not to be compatible with the modern K-quants.
@@ -68,6 +68,7 @@ With a Q6_K you should find it really hard to find a quality difference to the o
 
 
 
+ ---
 # Original Model Card:
 # MPT-7B-Chat
 
@@ -253,6 +254,7 @@ Please cite this model using the following format:
 ```
 
 ***End of original Model File***
+ ---
 
 
 ## Please consider to support my work
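
For readers picking between the quantized files this README describes, here is a minimal sketch (not part of this commit) of loading one of them with the llama-cpp-python bindings. The file name is hypothetical and stands in for whichever quant you download, e.g. the legacy Q4_0 or a K-quant such as Q6_K.

```python
# Minimal sketch, assuming the llama-cpp-python bindings are installed and a
# quantized GGUF file from this repo has been downloaded locally.
from llama_cpp import Llama

# The model_path below is hypothetical; point it at the quant file you chose.
llm = Llama(model_path="mpt-7b-chat.Q4_0.gguf", n_ctx=2048)

# Run a simple completion against the quantized model.
result = llm("User: How do K-quants differ from legacy quants?\nAssistant:", max_tokens=128)
print(result["choices"][0]["text"])
```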