---
license: odc-by
size_categories:
- 100M<n<1B
---
# Dataset Summary

This dataset is a filtered collection of Reddit posts and comments from 2024, prepared for research and educational purposes. It includes public web data from a wide range of subreddits, providing a snapshot of the discussions happening on the platform during this period. The data has been processed to anonymize personal information, specifically email addresses and IP addresses, preserving individual privacy while maintaining the integrity and context of the text.

### Supported Tasks and Leaderboards

The dataset may be used for a variety of natural language processing (NLP) tasks, including:

- Text Classification: Classifying comments and posts into categories based on sentiment, topic, or subreddit.

- Language Modeling: Training language models to understand and generate conversational text.

- Sentiment Analysis: Analyzing the sentiment of comments and posts across different subreddits and topics.

- Topic Modeling: Identifying and modeling the topics discussed in posts and comments.
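
As a concrete starting point for the classification tasks above, here is a minimal sketch that treats the subreddit field as a label. It assumes the column names listed under Data Fields below; `PLACEHOLDER_NAME` is the card's own placeholder for the dataset identifier.

```
from datasets import load_dataset

# Sketch: derive a text-classification example from each row by
# using the subreddit as the label. Column names follow Data Fields.
ds = load_dataset("PLACEHOLDER_NAME", split="train", streaming=True)
labeled = ds.map(lambda row: {"text": row["text"], "label": row["subreddit"]})

for example in labeled.take(2):
    print(example["label"], example["text"][:80])
```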
### Languages

The primary language of the dataset is English, as the majority of users post in English. Posts in other languages are also present, reflecting the platform's diverse user base.

# Dataset Structure

### Data Instances

Each data instance represents a post or comment and includes the following fields:

- id: A unique identifier for the comment or post.

- parent_id: The identifier of the parent comment or post (see the sketch after this list). The prefixes are defined as follows:

  - t5: subreddit

  - t3: post

  - t1: comment

- text: The content of the comment or post, with email addresses and IP addresses anonymized.

- url: The URL of the original thread on Reddit.

- date: The timestamp of the comment or post in UTC.

- language: The detected language of the text.

- language_score: The confidence score of the language detection.

- token_count: The number of tokens in the text, as determined by the GPT-2 tokenizer.

- score: The score (upvotes minus downvotes) of the comment or post.

- subreddit: The subreddit where the comment or post was made.

- author: The username of the author of the comment or post.
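A small helper for interpreting those prefixes. The `prefix_id` format (e.g. `t3_abc123`) is assumed here from Reddit's fullname convention and is not confirmed by the card, so verify against actual rows before relying on it:

```
# Map a parent_id prefix to the kind of item it points to.
# The "t3_abc123"-style format is an assumption based on Reddit's
# fullname convention, not something the card specifies.
PREFIX_KIND = {"t5": "subreddit", "t3": "post", "t1": "comment"}

def parent_kind(parent_id: str) -> str:
    """Return what the parent_id refers to: 'subreddit', 'post', or 'comment'."""
    prefix = parent_id.split("_", 1)[0]
    return PREFIX_KIND.get(prefix, "unknown")

print(parent_kind("t3_abc123"))  # "post" -> a top-level comment
print(parent_kind("t1_def456"))  # "comment" -> a reply to another comment
```
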
### Data Fields

- id: string

- parent_id: string

- text: string

- url: string

- date: string

- language: string

- language_score: float

- token_count: int

- score: int

- subreddit: string

- author: string

# Data Preprocessing

The dataset has undergone several preprocessing steps to ensure the quality and privacy of the data:

1. Personal Information Anonymization: Email addresses and IP addresses have been replaced with [EMAIL] and [IP] placeholders, respectively (see the sketch after this list).

2. Language Detection: Each text instance has been processed with FastText to detect its language and assign a confidence score.

3. Tokenization: Text instances have been tokenized with the GPT-2 tokenizer to provide a token count.

4. NSFW Filtering: Content marked as NSFW has been excluded, using the NSFW metadata provided by Reddit's moderation.
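The card does not publish the exact anonymization rules, so the following is only an illustrative sketch of the placeholder substitution using simple regular expressions:

```
import re

# Illustrative patterns only -- the dataset's actual anonymization
# pipeline is not published and may differ (e.g. IPv6, obfuscated emails).
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
IPV4_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def anonymize(text: str) -> str:
    """Replace email addresses and IPv4 addresses with the card's placeholders."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    return IPV4_RE.sub("[IP]", text)

print(anonymize("mail me at alice@example.com or ping 203.0.113.7"))
# -> mail me at [EMAIL] or ping [IP]
```
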
### Usage Example

Here is an example of how to load and use the dataset in Python:

```
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("PLACEHOLDER_NAME")

# Display the first example
print(dataset['train'][0])
```
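For larger-scale work, the dataset can also be streamed and filtered without downloading everything up front. A sketch, assuming the `language` and `language_score` fields described above (the 0.9 threshold is an arbitrary example, not a recommendation from the card):

```
from datasets import load_dataset

# Stream rows and keep only high-confidence English text.
dataset = load_dataset("PLACEHOLDER_NAME", split="train", streaming=True)
english = dataset.filter(
    lambda row: row["language"] == "en" and row["language_score"] > 0.9
)

for row in english.take(3):
    print(row["subreddit"], row["token_count"])
```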

# Dataset Creation

### Curation Rationale

The Reddit platform hosts public web content on a diverse range of topics, all presented in a conversational format, which has made it a resource for training some of the highest-profile LLMs to date. NAME is a large, clean pretraining dataset built from this content for use in developing open-source models for research and educational purposes.

### Source Data

This dataset is a filtered collection of posts and comments from Reddit in the year 2024.

### Annotations

We augment the scraped data with the language, language_score, and token_count annotations. The language and language_score annotations are generated with FastText, and token_count is generated with the GPT-2 tokenizer.
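A rough sketch of how these annotations could be reproduced, assuming FastText's public `lid.176.bin` language-ID model and the `gpt2` tokenizer from `transformers` (the card does not specify exact model versions):

```
import fasttext
from transformers import GPT2TokenizerFast

# Assumed models: FastText's lid.176.bin and the Hugging Face "gpt2"
# tokenizer; the card does not pin the versions actually used.
lid_model = fasttext.load_model("lid.176.bin")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

def annotate(text: str) -> dict:
    # FastText predicts on a single line, so strip newlines first.
    labels, scores = lid_model.predict(text.replace("\n", " "))
    return {
        "language": labels[0].replace("__label__", ""),
        "language_score": float(scores[0]),
        "token_count": len(tokenizer.encode(text)),
    }

print(annotate("Reddit threads make for lively conversational data."))
```
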
125
+ ### Personal and Sensitive Information
126
+
127
+ The dataset has been processed to anonymize personal information, specifically email addresses and IP addresses, ensuring the privacy of individuals while maintaining the integrity and context of the data.
128
+
129
+ # Considerations for Using the Data
130
+
131
+ ### Social Impact of Dataset
132
+
133
+ With the release of this dataset, we aim to make an invaluable development resource available to the community at large.
134
+
135
+ ### Discussion of Biases
136
+
137
+ Efforts were made to minimize the amount of NSFW and toxic content present in the dataset by employing filtering on the URL level.
138
+
139
+ # Additional Information
140
+
141
+ ### Licensing Information
142
+
143
+ The dataset is released under the Open Data Commons Attribution License (ODC-By) v1.0 [CM2] [license](https://opendatacommons.org/licenses/by/1-0/). Its availability is not an invitation to use any of the information for any illegal or unlawful purpose, or outside the scope of research or educational purposes.
144
+
145
+ ### Future work
146
+
147
+ Grass is a network for the acquisition of public web data, and we plan to continue building high quality, structured datasets for use in AI/ML research[CM4] . In addition to future offerings, we will also continue to improve NAME in future iterations.
148
+
149
+ ### Citation Information
150
+
151
+ If you use this dataset in your research or project, please cite it as follows:
152
+ ```
153
+ @dataset{PLACEHOLDER_NAME,
154
+ title = {Reddit Comments and Posts 2024},
155
+ year = {2024},
156
+ publisher = {Hugging Face},
157
+ url = {<https://huggingface.co/datasets/PLACEHOLDER/reddit_comments_2024>}
158
+ }
159
+ ```