maddes8cht committed
Commit bc44f5e (1 parent: 348b8c2)

"Update README.md"

Files changed (1): README.md (+3 -1)
README.md CHANGED

@@ -63,7 +63,7 @@ The core project making use of the ggml library is the [llama.cpp](https://githu
 
 There is a bunch of quantized files available. How to choose the best for you:
 
-# legacy quants
+# Legacy quants
 
 Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
 Nevertheless, they are fully supported, as there are several circumstances that cause certain model not to be compatible with the modern K-quants.
@@ -77,6 +77,7 @@ With a Q6_K you should find it really hard to find a quality difference to the o
 
 
 
+---
 # Original Model Card:
 # ✨ Falcon-7B-Instruct
 
@@ -291,6 +292,7 @@ Falcon-7B-Instruct is made available under the Apache 2.0 license.
 falconllm@tii.ae
 
 ***End of original Model File***
+---
 
 
 ## Please consider to support my work
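
For readers who want to try one of the quantized files this README change describes, below is a minimal sketch of downloading a GGUF quant and loading it with `huggingface_hub` and `llama-cpp-python`. The repo id and filename are illustrative assumptions, not taken from this repository's actual file list; pick whichever quantization (a K-quant such as Q6_K, or a legacy quant like Q4_0 or Q5_1) fits your hardware and model support.

```python
# Minimal sketch, not part of the commit: load a quantized GGUF file.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Assumed repo id and filename -- replace with an actual file from the repo's file list.
model_path = hf_hub_download(
    repo_id="maddes8cht/tiiuae-falcon-7b-instruct-gguf",   # hypothetical repo id
    filename="tiiuae-falcon-7b-instruct-Q6_K.gguf",        # hypothetical quant file
)

# Load the model; n_ctx sets the context window.
llm = Llama(model_path=model_path, n_ctx=2048)

# Run a short completion to verify the quant loads and generates.
out = llm("What is the capital of France?", max_tokens=64)
print(out["choices"][0]["text"])
```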