Kquant03 committed on
Commit
039dc5f
1 Parent(s): 4606a39

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -14,7 +14,7 @@ tags:
  # After merging, I tested this model for a long time...it blew away my expectations, so I just had it name itself.
  [GGUF FILES HERE](https://huggingface.co/Kquant03/Harmony-4x7B-GGUF)

- [Join our Discord!](https://discord.gg/CAfWPV82)
+ [Join our Discord!](https://discord.gg/XFChPkGd)

  [Buttercup](https://huggingface.co/Kquant03/Buttercup-4x7B-bf16), for a long time, was my best model (made by kquant)...but I genuinely think this improves upon reasoning and logic while retaining the RP value. The hyphens are telling that it has some pretty heavy GPT-4 data in it, though. I mean the whole point is to eventually outperform GPT-4 so of course it's probably best to pretrain with GPT-4 data, then fine tune off of it and then DPO the resulting fine tune...perhaps running another epoch of the previous fine-tune, afterwards.
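
The Buttercup paragraph in the README above sketches a recipe: supervised fine-tuning on GPT-4-style data, then DPO on the resulting fine-tune, and possibly another SFT epoch afterwards. As a rough, hypothetical illustration of that kind of pipeline (not this repository's actual training setup), something like the following could be run with Hugging Face TRL; the base model, dataset names, and hyperparameters below are placeholders, and exact trainer signatures vary between TRL releases.

```python
# Hypothetical sketch only: SFT on GPT-4-style instruction data, then DPO on
# the resulting fine-tune. All model/dataset names and hyperparameters are
# placeholders, not values taken from this repository.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer, DPOTrainer

base = "mistralai/Mistral-7B-v0.1"  # placeholder base model
tok = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Step 1: supervised fine-tune on instruction data (e.g. GPT-4-generated pairs).
sft_ds = load_dataset("placeholder/gpt4-style-sft-data", split="train")  # needs a "text" column
sft_trainer = SFTTrainer(
    model=model,
    tokenizer=tok,
    train_dataset=sft_ds,
    dataset_text_field="text",
    args=TrainingArguments(output_dir="sft-out", num_train_epochs=1),
)
sft_trainer.train()

# Step 2: DPO on preference pairs ("prompt", "chosen", "rejected" columns),
# starting from the SFT checkpoint; with ref_model=None TRL keeps a frozen
# copy of the policy as the reference model.
dpo_ds = load_dataset("placeholder/preference-pairs", split="train")
dpo_trainer = DPOTrainer(
    model=sft_trainer.model,
    ref_model=None,
    beta=0.1,
    tokenizer=tok,
    train_dataset=dpo_ds,
    args=TrainingArguments(output_dir="dpo-out", num_train_epochs=1),
)
dpo_trainer.train()

# Step 3 (optional, as the README speculates): run another epoch of the
# earlier SFT recipe on the DPO'd model before merging.
```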