---
license: mit
task_categories:
- text-generation
tags:
- human-feedback
- rlhf
- preferences
- reddit
size_categories:
- 100K<n<1M
language:
- en
---
# 🚢 Stanford Human Preferences Dataset (SHP)

## Summary

SHP is a dataset of **220K human preferences** over Reddit comments in 18 different subject areas, from cooking to legal advice.
It is primarily intended to be used for training reward models for RLHF and automatic evaluation models for NLG.
Each example is a Reddit post and a pair of top-level comments on that post, where one comment is more preferred by Reddit users.

Specifically, given a post P and two comments (A,B), we only included the preference A > B in the dataset if
1. A was written *no earlier than* B.
2. Despite being written later, A has a score that is at least 2 times as high as B's.
3. Both comments have a score >= 2 and the post has a score >= 10.
4. The post is a self-post (i.e., a body of text, not a link to another page) made before 2023, was not edited, and is not NSFW (over 18).
5. Neither comment was made by a deleted user, a moderator, or the post creator, and the post was not made by a deleted user or moderator.

Since comments made earlier get more visibility, the first condition is needed to ensure that A's higher score is not simply the result of a first-mover advantage.
Since the comment score is only a noisy estimate of the comment's utility, the second and third conditions were enforced to ensure that the preference is genuine.
We did not allow edited posts, since editing opens the possibility of one comment being written with more context than the other.

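The filtering rules above can be sketched as a predicate over candidate comment pairs. This is a minimal illustration with hypothetical field names, not the actual scraping code:

```python
def is_valid_preference(post, comment_a, comment_b):
    """Return True if the pair qualifies as a preference A > B under the
    SHP filtering rules. Field names here are hypothetical stand-ins."""
    return (
        comment_a["created_utc"] >= comment_b["created_utc"]  # A written no earlier than B
        and comment_a["score"] >= 2 * comment_b["score"]      # A's score at least 2x B's
        and comment_a["score"] >= 2                           # both comments scored >= 2
        and comment_b["score"] >= 2
        and post["score"] >= 10                               # post scored >= 10
        and post["is_self"]                                   # self-post, not a link
        and not post["edited"]
        and not post["over_18"]                               # not NSFW
    )
```

(Conditions on user identity — deleted users, moderators, the post creator — would need account metadata and are omitted here.)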
SHP compares favorably to other preference datasets.
The input in SHP contains X bits of [FLAN-T5-usable information](https://icml.cc/virtual/2022/oral/16634) about the preference label, compared to only Y bits in [Anthropic's HH-RLHF dataset](https://huggingface.co/datasets/Anthropic/hh-rlhf).
This may be because the aggregate human preferences in SHP are easier to predict than the individual human preferences in the Anthropic data, and because of the strict data filtering described above.


## Files

SHP contains a train, validation, and test split for comments scraped from 18 different subreddits:
`askculinary`, `askhr`, `askdocs`, `askanthropology`, `asksciencefiction`, `askacademia`, `askengineers`, `legaladvice`, `explainlikeimfive`, `askbaking`, `askphysics`, `askscience`, `askphilosophy`, `askvet`, `changemyview`, `askcarguys`, `askhistorians`, `asksocialscience`.

We chose subreddits based on:
1. whether they were well-known (subscriber count >= 50K)
2. whether they were actively moderated
3. whether comments had to be rooted in some objectivity, instead of being entirely about personal experiences (e.g., `askscience` vs. `AskAmericans`)

The train/validation/test splits were created by splitting the post IDs of a subreddit in 90%/5%/5% proportions respectively, so that no post would appear in multiple splits.
Since different posts have different numbers of comments, the number of preferences in each split is not exactly 90%/5%/5%, but fairly close to it.

|                    |  train | validation |  test |  total |
|--------------------|-------:|-----------:|------:|-------:|
| Number of Examples | 198556 |      10555 | 10454 | 219565 |

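The post-ID-based split can be sketched as follows. This is a hypothetical illustration of the procedure (shuffle post IDs, then cut at 90%/5%/5%), not the authors' actual script:

```python
import random

def split_post_ids(post_ids, seed=0):
    """Assign each post ID to train/validation/test in 90/5/5 proportions,
    so all comments from one post land in the same split."""
    rng = random.Random(seed)
    ids = sorted(post_ids)   # sort first so the shuffle is reproducible
    rng.shuffle(ids)
    n = len(ids)
    n_train = int(0.9 * n)
    n_val = int(0.05 * n)
    return {
        "train": set(ids[:n_train]),
        "validation": set(ids[n_train:n_train + n_val]),
        "test": set(ids[n_train + n_val:]),
    }
```

Because the cut is over post IDs rather than individual preferences, split sizes in *preferences* drift slightly from 90/5/5, matching the table above.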
## Data Structure

Here's an example from the training data:
```json
{
    "post_id": "qt3nxl",
    "domain": "askculinary_train",
    "upvote_ratio": 0.98,
    "history": "What's the best way to disassemble raspberries? Like this, but down to the individual seeds: https:\/\/i.imgur.com\/Z0c6ZKE.jpg\n\nI've been pulling them apart with tweezers and it's really time consuming. I have about 10 pounds to get through this weekend.",
    "c_root_id_A": "hkh25lp",
    "c_root_id_B": "hkh25sc",
    "created_at_utc_A": 1636822110,
    "created_at_utc_B": 1636822112,
    "score_A": 166,
    "score_B": 340,
    "human_ref_A": "Raspberry juice will make a bright stain at first, but in a matter of weeks it will start to fade away to almost nothing. It is what is known in the natural dye world as a fugitive dye, it will fade even without washing or exposure to light. I hope she gets lots of nice photos of these stains on her dress, because soon that will be all she has left of them!",
    "human_ref_B": "Pectinex, perhaps?\n\nIt's an enzyme that breaks down cellulose. With citrus, you let it sit in a dilute solution of pectinex overnight to break down the connective tissues. You end up with perfect citrus supremes. If you let the raspberries sit for a shorter time, I wonder if it would separate the seeds the same way...?\n\nHere's an example: https:\/\/www.chefsteps.com\/activities\/perfect-citrus-supreme",
    "labels": 0,
    "seconds_difference": 2.0,
    "score_ratio": 2.0481927711
}
```

where the fields are:
- `post_id`: the ID of the Reddit post (string)
- `domain`: the subreddit and split the example is drawn from, separated by an underscore (string)
- `upvote_ratio`: the upvote ratio of the Reddit post (float)
- `history`: the post title concatenated to the post body (string)
- `c_root_id_A`: the ID of comment A (string)
- `c_root_id_B`: the ID of comment B (string)
- `created_at_utc_A`: UTC timestamp of when comment A was created (integer)
- `created_at_utc_B`: UTC timestamp of when comment B was created (integer)
- `score_A`: score of comment A (integer)
- `score_B`: score of comment B (integer)
- `human_ref_A`: text of comment A (string)
- `human_ref_B`: text of comment B (string)
- `labels`: the preference label -- 1 if A is preferred to B, 0 if B is preferred to A; randomized so that the label distribution is roughly 50/50 (integer)
- `seconds_difference`: how many seconds after the less preferred comment the more preferred one was created (always positive) (float)
- `score_ratio`: the ratio of the more preferred comment's score to the less preferred comment's score (always >= 2) (float)

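The two derived fields follow directly from the raw ones. For the example record above (where `labels` is 0, i.e., B is preferred):

```python
# Values from the example record above; B (labels == 0) is the preferred comment.
created_at_utc_A, created_at_utc_B = 1636822110, 1636822112
score_A, score_B = 166, 340

# seconds_difference: preferred comment's timestamp minus the less preferred one's.
seconds_difference = float(created_at_utc_B - created_at_utc_A)

# score_ratio: preferred comment's score over the less preferred one's.
score_ratio = score_B / score_A

print(seconds_difference)      # 2.0
print(round(score_ratio, 10))  # 2.0481927711
```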

## Disclaimer

Although we filtered out posts with NSFW (over 18) content, some of the data may contain discriminatory or harmful language.
The data does not reflect the views of the dataset creators.
Please only engage with the data in accordance with your own personal risk tolerance.

Reddit users on these subreddits are also not necessarily representative of the broader population, which one should keep in mind before using any models trained on this data.
As always, remember to evaluate!


## FAQs

**Q**: *I'm trying to train a FLAN-T5/T5 model on these preferences, but the loss won't converge. Help!*

**A**: The most likely problem is that you're feeding in the post text AND one or both comments as input, which is far larger than the 512 tokens these models support.
Even though they use relative position embeddings, in our experience, exceeding this limit is not helpful when training a preference/reward model on this data.
To avoid it, truncate the post text as much as possible, such that the whole input is under 512 tokens (do not truncate the comment(s), however). If the input is still over 512 tokens, simply skip the example.
This should allow you to train on most of the examples and still get a preference model that is ~75% accurate at predicting human preferences.
We are currently training a preference model on this data and will make it available shortly.

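The truncation strategy above can be sketched as follows. This is a minimal illustration: `tokenize` is a stand-in for your model's tokenizer (e.g., a FLAN-T5 tokenizer's encode method), and the function name is hypothetical:

```python
def build_input(post, comment_a, comment_b, tokenize, max_len=512):
    """Tokenize the post and comments, truncating ONLY the post so the whole
    sequence fits in max_len tokens. Returns None if the example should be
    skipped (i.e., the comments alone exceed the budget)."""
    a, b = tokenize(comment_a), tokenize(comment_b)
    budget = max_len - len(a) - len(b)
    if budget <= 0:
        return None  # even an empty post won't fit: skip this example
    return tokenize(post)[:budget] + a + b
```

With a real tokenizer you would also account for special tokens and any separator text between post and comments; the idea is the same.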
**Q**: *Why did you threshold the score ratio rather than the score difference when filtering preferences?*

**A**: Some Reddit posts get far less traffic than others, which means their comments have lower absolute scores.
An absolute difference threshold would disproportionately exclude comments from these posts, a kind of bias that we didn't want to introduce.

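To illustrate with hypothetical comment scores: the ratio rule treats low- and high-traffic posts the same, while an absolute difference threshold (say, 50) would drop every pair from the quieter post:

```python
# Hypothetical (preferred_score, other_score) pairs for two posts.
pairs = {
    "low_traffic_post":  (12, 5),
    "high_traffic_post": (1200, 500),
}

for post, (a, b) in pairs.items():
    ratio_ok = a >= 2 * b        # the ratio-based SHP rule
    diff_ok = (a - b) >= 50      # a hypothetical absolute-difference rule
    print(post, ratio_ok, diff_ok)
```

Both pairs express an equally clear preference, but only the ratio rule keeps both.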
**Q**: *Did you scrape every post on those 18 subreddits?*

**A**: No. Reddit makes it very difficult to get anything beyond the top 1000 posts.
We started with the top-scoring 1000 posts (of all time) and searched for the 25 most similar posts to each one using the Reddit search function.
By doing this recursively, we scraped up to 7500 post IDs for each subreddit and then used the AsyncPRAW API to scrape the top 50 comments from each post.
We limited the scraping to 50 comments per post because the number of comments per post is Pareto-distributed, and we did not want a relatively small number of posts dominating the data.

**Q**: *How did you preprocess the text?*

**A**: We tried to keep preprocessing to a minimum. Subreddit-specific abbreviations were expanded ("CMV" to "Change my view that").
In hyperlinks, only the referring text was kept and the URL was removed (if the URL was written out, it was kept).

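The hyperlink rule can be sketched with a regex over markdown-style links. This is an illustration of the behavior described above, not the authors' exact preprocessing script:

```python
import re

def strip_markdown_links(text):
    """Replace markdown links [text](url) with just the referring text.
    Bare, written-out URLs elsewhere in the comment are left untouched."""
    return re.sub(r"\[([^\]]+)\]\([^)]+\)", r"\1", text)

print(strip_markdown_links("See [this guide](https://example.com) and https://example.org"))
# See this guide and https://example.org
```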

## Contact

Please contact kawin@stanford.edu if you have any questions about the data.