owao committed on
Commit
a0beb9a
1 Parent(s): ea084d5

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -3,7 +3,7 @@ license: mit
 language:
 - en
 ---
-For the limited bandwidth ones <3
+For the ones with limited bandwidth, we don't forget you <3
 
 # GGUFs for [HanNayeoniee/LHK_DPO_v1](https://huggingface.co/HanNayeoniee/LHK_DPO_v1)
 For a general representation of how quantization level influences output quality, check any model card from TheBloke, or [see this table](https://docs.faraday.dev/models/choose-model#size-vs-perplexity-tradeoff). Note those benchmarks were done on Llama models, and are probably not recent. Also I don't know how the MOE architecture influences those results but you got the idea!
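
Since the README addresses users with limited bandwidth, here is a minimal sketch of fetching a single quantized GGUF file rather than cloning the whole repo, using `huggingface_hub`. The `repo_id` and `filename` below are hypothetical placeholders, not confirmed by this commit; check the actual file list on the model page.

```python
# Minimal sketch: download one quantized GGUF instead of the full repo,
# which saves bandwidth when only a single quant level is needed.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="owao/LHK_DPO_v1_GGUF",     # hypothetical repo id
    filename="LHK_DPO_v1.Q4_K_M.gguf",  # hypothetical quant filename
)
print(path)  # local cache path of the downloaded file
```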