opiyu committed
Commit 8d89535
1 Parent(s): fb5875b

Update README.md

Files changed (1): README.md +6 -2
README.md CHANGED
@@ -3,6 +3,10 @@ license: mit
 language:
 - en
 ---
-
 # LHK_DPO_v1
-HanNayeoniee/LHK_DPO_v1 is trained via Direct Preference Optimization(DPO) from [TomGrc/FusionNet_7Bx2_MoE_14B](https://huggingface.co/TomGrc/FusionNet_7Bx2_MoE_14B).
+Here are some GGUF quantized files for LHK_DPO_v1.
+Don't hesitate to make requests for more quants while I still have the bfloat16 model on my drive (as I regularly have to make some space)!
+
+[HanNayeoniee/LHK_DPO_v1](https://huggingface.co/HanNayeoniee/LHK_DPO_v1) is trained via Direct Preference Optimization(DPO) from [TomGrc/FusionNet_7Bx2_MoE_14B](https://huggingface.co/TomGrc/FusionNet_7Bx2_MoE_14B).
+
+For more info on quantization perplexity loss, check this table: https://docs.faraday.dev/models/choose-model#size-vs-perplexity-tradeoff
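The size-vs-perplexity tradeoff the linked table documents comes from storing weights at lower bit widths. As a rough illustration only (not how GGUF quantizes in practice, which uses per-block scales and several quant schemes), here is a minimal sketch of symmetric round-to-nearest quantization; the function name and bit widths are hypothetical choices for the demo:

```python
import numpy as np

def quantize_dequantize(weights, bits):
    # Symmetric round-to-nearest: map weights onto 2^(bits-1)-1 signed
    # integer levels, then scale back to floats to measure what was lost.
    levels = 2 ** (bits - 1) - 1               # e.g. 7 levels for 4-bit
    scale = np.abs(weights).max() / levels     # one global scale (toy setup)
    q = np.clip(np.round(weights / scale), -levels, levels)
    return q * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(10_000).astype(np.float32)  # stand-in for a weight tensor

for bits in (8, 4, 2):
    err = np.abs(w - quantize_dequantize(w, bits)).mean()
    print(f"{bits}-bit mean abs error: {err:.4f}")
```

Fewer bits means coarser levels and larger reconstruction error, which is what shows up downstream as higher perplexity in the linked table.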