maddes8cht committed
Commit 07e4be0
1 Parent(s): bdd52f8

"Update README.md"

Files changed (1)
  1. README.md +1 -2
README.md CHANGED
@@ -33,13 +33,12 @@ So this solution ensures improved performance and efficiency over legacy Q4_0, Q
 
 ---
 # Brief
- I have a problem with the OpenAssistant falcon *sft* models
+ Finally got the OpenAssistant falcon *sft* models working again
 
 * [falcon-7b-sft-top1-696](https://huggingface.co/OpenAssistant/falcon-7b-sft-top1-696)
 * [falcon-40b-sft-top1-560](https://huggingface.co/OpenAssistant/falcon-40b-sft-top1-560)
 * [falcon-40b-sft-mix-1226](https://huggingface.co/OpenAssistant/falcon-40b-sft-mix-1226)
 
- which currently prevents me from re-quantizing these models. It is not clear to me at the moment if this problem can be solved.
 
 
 ---