TheBloke committed
Commit
4dd57ef
1 Parent(s): bd29303

Update README.md

Files changed (1): README.md (+1, -2)
README.md CHANGED
@@ -34,7 +34,7 @@ This is a DOI repository, created 26th June 2023. It contains the GGML model fil
 
 The purpose of a DOI repository is to provide a permanent record of a set of files, guaranteed not to change. Therefore the GGML files in this repository will never update.
 
-For the current version GGML files for Selfee 13B, please check [TheBloke/Selfee-13B-GGML](https://huggingface.co/TheBloke/Selfee-13B-GGML).
+If you're looking for the latest GGML files for Selfee 13B, please check [TheBloke/Selfee-13B-GGML](https://huggingface.co/TheBloke/Selfee-13B-GGML).
 
 ## Repositories available
 
@@ -89,7 +89,6 @@ Refer to the Provided Files table below to see what files use which methods, and
 | selfee-13b.ggmlv3.q6_K.bin | q6_K | 6 | 10.68 GB | 13.18 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
 | selfee-13b.ggmlv3.q8_0.bin | q8_0 | 8 | 13.83 GB | 16.33 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
 
-
 **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
 
 ## How to run in `llama.cpp`
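For context on the note about GPU offloading and the "How to run in `llama.cpp`" heading in the hunk above: below is a minimal sketch, not part of this commit, of how such a GGML file is typically run. It assumes a GPU-enabled (e.g. cuBLAS) build of llama.cpp from the GGMLv3 era, whose CLI binary was `main`; the file name comes from the table above, and the thread count and `-ngl` layer count are placeholder values to adjust for your hardware.

```bash
# Illustrative only (not part of this commit): run the q6_K file from the table above
# with llama.cpp's GGMLv3-era `main` binary. -ngl offloads that many layers to the GPU,
# which lowers RAM use and uses VRAM instead, as the README note describes.
# -t (threads) and -ngl (GPU layers) are placeholders; -ngl needs a GPU-enabled build.
./main -m selfee-13b.ggmlv3.q6_K.bin -t 8 -ngl 32 -c 2048 --temp 0.7 \
  -p "Write a short summary of what a DOI repository is."
```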