Update README.md
README.md
CHANGED
@@ -31,7 +31,7 @@ Please note that these GGMLs are **not compatbile with llama.cpp**. Please see b
 
 ## Repositories available
 
 * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/starchat-beta-GPTQ)
-* [
+* [4, 5, and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/starchat-beta-GGML)
 * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/HuggingFaceH4/starchat-beta)
 
 <!-- compatibility_ggml start -->