andysalerno committed
Commit ba3caf5
Parent(s): 4eb38f5
Create README.md
README.md ADDED
@@ -0,0 +1,17 @@
---
license: apache-2.0
base_model: openchat/openchat-3.5-0106
datasets:
- berkeley-nest/Nectar
---
This is openchat/openchat-3.5-0106, tuned with DPO on a subset of Nectar. This time, training ran for 5000 steps, a full epoch.

Careful attention was paid to make sure the chat template was followed properly.
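As an illustration only (not the actual training code), here is a minimal sketch of applying the model's own chat template via `tokenizer.apply_chat_template`; the example messages are hypothetical:

```python
from transformers import AutoTokenizer

# Minimal sketch: build a prompt string using the model's bundled chat template.
# The messages below are hypothetical examples, not data from Nectar.
tokenizer = AutoTokenizer.from_pretrained("openchat/openchat-3.5-0106")

messages = [
    {"role": "user", "content": "Summarize what DPO does."},
    {"role": "assistant", "content": "DPO fine-tunes a model directly on preference pairs."},
    {"role": "user", "content": "How is that different from RLHF?"},
]

# add_generation_prompt=True appends the assistant prefix defined by the template,
# so a chosen/rejected completion can be concatenated directly after the prompt.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```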
Data selection and filtering:
- filtered the dataset to only include examples with multiple turns, to preserve strength in multi-turn scenarios
- used the 4th-ranked response as the "rejected" instead of the 3rd. When I inspected the dataset, I frequently could not find any meaningful difference in quality between the 1st- and 3rd-ranked responses, so to make the accepted/rejected signal extra clear, I swapped the 3rd-ranked response for the 4th.
- filtered out any examples with "good_natured == False". Why? When I inspected such examples in the Nectar dataset, I noticed they frequently include refusals even from the top-ranked model. So, counter-intuitively, including "bad natured" entries might actually censor the model *more*, since the top responses (as ranked by GPT-4) to these queries tend to be refusals. Not to mention, the quality of "bad natured" conversations tends to be worse in general, in my opinion. A sketch of the full selection and filtering pipeline follows this list.
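The sketch below shows one way the selection above could be implemented with the `datasets` library. It is not the exact script used, and the Nectar field names (`turns`, `good_natured`, `answers`, `rank`, `answer`) are assumptions about the dataset schema:

```python
from datasets import load_dataset

# Minimal sketch of the filtering described above; field names are assumed,
# not guaranteed to match the Nectar schema exactly.
nectar = load_dataset("berkeley-nest/Nectar", split="train")

# Keep good-natured, multi-turn examples that have at least 4 ranked responses.
nectar = nectar.filter(
    lambda ex: ex["good_natured"] and ex["turns"] > 1 and len(ex["answers"]) >= 4
)

def to_dpo_pair(ex):
    # Rank 1 is "chosen"; rank 4 (rather than 3) is "rejected",
    # to make the preference signal clearer.
    by_rank = {a["rank"]: a["answer"] for a in ex["answers"]}
    return {"prompt": ex["prompt"], "chosen": by_rank[1], "rejected": by_rank[4]}

dpo_pairs = nectar.map(to_dpo_pair, remove_columns=nectar.column_names)
```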
Differences from 0.4:
- Trained for 5000 steps instead of 500, with a lower learning rate and a slower warmup period (a rough configuration sketch follows below).
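For context, the sketch below shows roughly how such a run could be configured with TRL's `DPOTrainer` (an assumption; the actual training script isn't included here). The learning-rate, warmup, and batch-size values are illustrative placeholders rather than the real ones, and keyword names vary between TRL versions:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

# Illustrative sketch only: hyperparameter values below are placeholders.
model = AutoModelForCausalLM.from_pretrained("openchat/openchat-3.5-0106")
ref_model = AutoModelForCausalLM.from_pretrained("openchat/openchat-3.5-0106")
tokenizer = AutoTokenizer.from_pretrained("openchat/openchat-3.5-0106")

training_args = TrainingArguments(
    output_dir="openchat-nectar-dpo",  # hypothetical output path
    max_steps=5000,                    # a full epoch, vs. 500 steps in 0.4
    learning_rate=5e-7,                # placeholder for the "lower learning rate"
    warmup_steps=500,                  # placeholder for the "slower warmup period"
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
)

trainer = DPOTrainer(
    model=model,
    ref_model=ref_model,
    args=training_args,
    train_dataset=dpo_pairs,           # the preference pairs built in the sketch above
    tokenizer=tokenizer,               # newer TRL versions use `processing_class` instead
)
trainer.train()
```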