teknium commited on
Commit
0404451
1 Parent(s): f1f62e9

Update README.md

Files changed (1): README.md (+2 −2)
README.md CHANGED
@@ -28,9 +28,9 @@ Nous Hermes 2 Mixtral 8x7B DPO is the new flagship Nous Research model trained o
 
 The model was trained on over 1,000,000 entries of primarily GPT-4 generated data, as well as other high quality data from open datasets across the AI landscape, achieving state of the art performance on a variety of tasks.
 
-This is the SFT + DPO version of Mixtral Hermes 2, we will also be providing an SFT only version, for people to find which works best for them.
+This is the SFT + DPO version of Mixtral Hermes 2; we are also providing an SFT-only version, for people to find which works best for them, which can be found here: https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-SFT
 
-## Huge shout out to Together.ai for sponsoring our compute during the many experiments both training Mixtral and working on DPO!
+## We are grateful to Together.ai for sponsoring our compute during the many experiments both training Mixtral and working on DPO!
 
 # Table of Contents
 1. [Example Outputs](#example-outputs)