---
license: odc-by
size_categories:
- 100M<n<1B
configs:
- config_name: default
data_files:
- split: train
path: "data/*.parquet"
---
<center>
<img src="https://static.grassfoundation.io">
</center>
# Dataset Summary
This dataset is a filtered collection of posts and comments from Reddit in the year 2024. It has been prepared for research and educational purposes. This dataset includes public web data from various subreddits, providing a snapshot of the discussions happening on the platform during this period. The dataset has been processed to anonymize any personal information found in the posts and comments, specifically email addresses and IP addresses, ensuring the privacy of individuals while maintaining the integrity and context of the data.
### Supported Tasks and Leaderboards
The dataset may be used for a variety of natural language processing (NLP) tasks including:
- Text Classification: Classifying comments and posts into categories based on sentiment, topic, or subreddit.
- Language Modeling: Training language models to understand and generate conversational text.
- Sentiment Analysis: Analyzing the sentiment of comments and posts across different subreddits and topics.
- Topic Modeling: Identifying and modeling topics discussed in the posts and comments.
### Languages
The primary language of the dataset is English, as the majority of users post in English. However, posts in other languages may also be present, reflecting the diverse user base of the platform.
# Dataset Structure
### Data Instances
Each data instance represents a post or comment and includes the following fields:
- id: A unique identifier for the comment or post.
- parent_id: The identifier of the parent object: the parent comment or post for a comment, or the subreddit for a post (see the sketch after this list). The prefix indicates the parent's type:
- t5: subreddit
- t3: post
- t1: comment
- text: The content of the comment or post, with email addresses and IP addresses anonymized.
- url: The URL of the original thread on Reddit.
- date: The timestamp of the comment or post in UTC.
- language: The detected language of the text.
- language_score: The confidence score of the language detection.
- token_count: The number of tokens in the text, as determined by the GPT-2 tokenizer.
- score: The score (upvotes minus downvotes) of the comment or post.
- subreddit: The subreddit where the comment or post was made.
- author: The username of the author of the comment or post.
- media_urls: An array of links to any multimedia included in the comment or post.
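As an illustration, the parent_id prefix can be used to tell posts apart from top-level comments and replies. This is a minimal sketch, assuming the usual Reddit fullname format with an underscore separator (e.g. t3_abc123); the field names follow the schema below.

```
def classify_record(record: dict) -> str:
    """Classify a row by the type of its parent object."""
    parent = record.get("parent_id") or ""
    if parent.startswith("t5_"):
        return "post"               # a post's parent is its subreddit
    if parent.startswith("t3_"):
        return "top-level comment"  # parent is a post
    if parent.startswith("t1_"):
        return "reply"              # parent is another comment
    return "unknown"

print(classify_record({"parent_id": "t3_1abcde"}))  # top-level comment
```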
### Data Fields
- id: string
- parent_id: string
- text: string
- url: string
- date: string
- language: string
- language_score: float
- token_count: int
- score: int
- subreddit: string
- author: string
- media_urls: array
# Data Preprocessing
The dataset has undergone several preprocessing steps to ensure the quality and privacy of the data:
1. Personal Information Anonymization: Email addresses and IP addresses have been replaced with [EMAIL] and [IP] placeholders, respectively (see the sketch after this list).
2. Language Detection: Each text instance has been processed using FastText to detect its language and assign a confidence score.
3. Tokenization: Text instances have been tokenized using the GPT-2 tokenizer to provide a token count.
4. NSFW Filtering: The dataset has been filtered to exclude content marked as NSFW, utilizing the NSFW metadata provided by Reddit's moderation.
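For illustration, the anonymization step can be approximated with regular expressions. The exact patterns used for this dataset are not published, so the sketch below is an assumption that covers plain email addresses and IPv4 addresses only.

```
import re

# Hypothetical patterns; the dataset's actual rules may differ
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
IPV4_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def anonymize(text: str) -> str:
    text = EMAIL_RE.sub("[EMAIL]", text)
    return IPV4_RE.sub("[IP]", text)

print(anonymize("mail me at jane@example.com from 10.0.0.1"))
# -> mail me at [EMAIL] from [IP]
```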
### Usage Example
Here is an example of how to load and use the dataset in Python:
```
from datasets import load_dataset

# Load the dataset in streaming mode to avoid downloading it in full
dataset = load_dataset("OpenCo7/UpVoteWeb", split="train", streaming=True)

# Inspect the first record
print(next(iter(dataset)))
```
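Because the dataset is streamed, rows can also be filtered lazily. The sketch below keeps only high-confidence English text; the field names come from the schema above, and the 0.9 threshold is illustrative.

```
# Keep rows whose detected language is English with high confidence
english = dataset.filter(
    lambda row: row["language"] == "en" and row["language_score"] > 0.9
)
print(next(iter(english))["text"])
```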
# Dataset Creation
### Curation Rationale
Reddit hosts public web content on a diverse range of topics, all presented in a conversational format, which has made it a resource for training some of the highest-profile LLMs to date. UpVoteWeb is a large, clean pretraining dataset built from this content, provided for research and educational purposes and intended for use in developing open-source models.
### Source Data
This dataset is a filtered collection of posts and comments from Reddit in the year 2024.
### Annotations
We augment the scraped data with language, language_score, and token_count annotations. The language and language_score annotations are generated using FastText, and token_count is generated using the GPT-2 tokenizer.
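These annotations can be reproduced along the following lines. The sketch assumes the publicly available lid.176.bin FastText language-identification model and the Hugging Face gpt2 tokenizer; the exact configuration used for this dataset has not been published.

```
import fasttext
from transformers import GPT2TokenizerFast

lid_model = fasttext.load_model("lid.176.bin")  # FastText language ID model
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

def annotate(text: str) -> dict:
    # FastText expects single-line input
    labels, scores = lid_model.predict(text.replace("\n", " "))
    return {
        "language": labels[0].replace("__label__", ""),
        "language_score": float(scores[0]),
        "token_count": len(tokenizer.encode(text)),
    }

print(annotate("Reddit threads make for conversational training data."))
```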
### Personal and Sensitive Information
The dataset has been processed to anonymize personal information, specifically email addresses and IP addresses, ensuring the privacy of individuals while maintaining the integrity and context of the data.
# Considerations for Using the Data
### Social Impact of Dataset
With the release of this dataset, we aim to make this development resource available to the community at large.
### Discussion of Biases
Efforts were made to minimize NSFW and toxic content in the dataset by filtering at the URL level.
# Additional Information
### Licensing Information
The dataset is released under the Open Data Commons Attribution License (ODC-By) v1.0 ([license](https://opendatacommons.org/licenses/by/1-0/)). Its availability is not an invitation to use any of the information for any illegal or unlawful purpose, or outside the scope of research or educational purposes.
### Future Work
Grass is a network for the acquisition of public web data, and we plan to continue building high-quality, structured datasets for use in AI/ML research. In addition to future offerings, we will also continue to improve UpVoteWeb in future iterations.
### Citation Information
If you use this dataset in your research or project, please cite it as follows:
```
@dataset{UpVoteWeb,
  title     = {Reddit Comments and Posts 2024},
  year      = {2024},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/OpenCo7/UpVoteWeb}
}
```