TheBloke committed on
Commit
120109a
1 Parent(s): 8eff3d4

Update README.md

Files changed (1): README.md (+1 −1)
README.md CHANGED
@@ -39,7 +39,7 @@ Currently these files will also not work with code that previously supported Fal
 
 ## Repositories available
 
-* [2, 3, 4, 5, 6, 8-bit GGCT models for CPU+GPU inference](https://huggingface.co/TheBloke/falcon-40b-sft-mix-1226-GGML)
+* [2, 3, 4, 5, 6, 8-bit GGCC models for CPU+GPU inference](https://huggingface.co/TheBloke/falcon-40b-sft-mix-1226-GGML)
 * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/OpenAssistant/falcon-40b-sft-mix-1226)
 
 ## Prompt template