OKReddit α (Alpha)
Dataset Summary
OKReddit is a filtered collection of 5TiB of Reddit submissions and comments from 2005 to 2023. This dataset has been prepared for research and archival purposes.
This dataset includes (obviously) a filtered list of subreddits.
- Curated by: KaraKaraWitch
- Funded by: Recursal.ai
- Shared by: KaraKaraWitch
- Language(s) (NLP): Mainly English. Other languages are available at smaller sizes.
- License: Not available in the alpha release; refer to Licensing Information for the data license. Scripts in the `Scripts` folder are Apache 2.0.
NOTE: While the dataset is currently usable, it's marked as alpha:
- There are some stray filters that I should really add, but they are not included in this release.
- The current data structure is quite poor for processing.
- Some [deleted] users' texts are empty.
- Some subreddits were not processed properly or fully due to Python exceptions.
We are currently addressing these issues by re-running the fixed script; they will be resolved in the next release.
Dataset Sources
- Source Data: Academic Torrents (by stuck_in_the_matrix, Watchful1, RaiderBDev & the Pushshift folks).
Supported Tasks and Leaderboards
The dataset may be used for a variety of natural language processing (NLP) tasks including:
- Text Classification: Classifying comments and posts into categories based on sentiment, topic, or subreddit.
- Language Modeling: Training language models to understand and generate conversational text.
- Sentiment Analysis: Analyzing the sentiment of comments and posts across different subreddits and topics.
- Topic Modeling: Identifying and modeling topics discussed in the posts and comments.
Languages
The primary language of the dataset is English, as the majority of redditors are English-educated. However, posts in other languages may also be present in smaller quantities.
Dataset Structure
Data Instances
Each data instance represents a submission thread within a subreddit.
- `thread_id`: The submission thread ID, inclusive of the `t3_` prefix that Reddit uses to mark an ID as a thread: `https://reddit.com/r/<SUBREDDIT>/comments/<THREAD_ID>/`
- `subreddit`: The name of the subreddit. Case-insensitive; Reddit redirects you to the correctly-cased subreddit name.
- `namedconversation`: An OpenAI-"compatible" conversation:
  - `from`: The author username that posted the content. It is not `user`, `system` or `model`!
  - `content`: The Reddit markdown posted.
  - The first value of `namedconversation` is the submission; the rest are replies.
  - If a submission is marked as NSFW / Mature, `[R-18]` is appended to the front of the title.
- `submission` / `comments`: The raw submission and comments respectively.
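Since `thread_id` carries the `t3_` prefix, the canonical thread URL can be rebuilt from the `subreddit` and `thread_id` fields. A minimal sketch (the helper name is our own):

```python
# Rebuild the thread URL from the `subreddit` and `thread_id` fields.
# `t3_` is Reddit's type prefix for submissions, so it is stripped first.
def thread_url(subreddit, thread_id):
    short_id = thread_id.removeprefix("t3_")
    return "https://reddit.com/r/{}/comments/{}/".format(subreddit, short_id)

print(thread_url("Gaben", "t3_of7h2"))
# → https://reddit.com/r/Gaben/comments/of7h2/
```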
Unsure or Confused? We have provided a real sample below.
Data Sample
Sample Thread
{
"thread_id": "t3_of7h2",
"subreddit": "Gaben",
"namedconversation": [
{
"from": "[deleted]",
"content": "[13 Jan 2012, 07:01:07] TIL Half-Life 2's source code was hacked because the hacker guessed Gabe's password, which was \"gaben\"\n\nLink: half-life.wikia.com"
},
{
"from": "clydethefrog",
"content": "[15 Jan 2012, 18:01:06] That's my password too"
},
{
"from": "Dunge",
"content": "[29 Feb 2012, 02:02:34] \"Gembe was led into believing that Valve wanted to employ him as an in-house security auditor. He was to be offered a flight to the USA and was to be arrested on arrival by the FBI.\"\n\nWow that's sad"
},
{
"from": "captainregularr",
"content": "[13 Jan 2012, 14:01:14] Did you know gaben makes me gaben my gaben?"
},
{
"from": "Turellio",
"content": "[13 Jan 2012, 17:01:53] that's what gaben gaben"
},
{
"from": "captainregularr",
"content": "[13 Jan 2012, 17:01:05] I gaben to gaben's demands."
},
{
"from": "RagingRetard",
"content": "[13 Jan 2012, 17:01:49] Oh, quit your incessant gaben."
}
],
"submission": {
"sub": {
"name": "Gaben",
"id": "2scx1",
"subs": null,
"type": null
},
"author": null,
"title": "TIL Half-Life 2's source code was hacked because the hacker guessed Gabe's password, which was \"gaben\"",
"score": 23,
"created": 1326440407.0,
"id": "of7h2",
"flags": "",
"link_flair": null,
"url": "http://half-life.wikia.com/wiki/Half-Life_2_Beta#Source_code_leak",
"text": "",
"removed": [],
"cross": []
},
"comments": [
{
"sub": {
"name": "Gaben",
"id": "2scx1",
"subs": -1,
"type": ""
},
"author": {
"name": "clydethefrog",
"uid": "",
"create": -1,
"flair": null,
"patreon": false,
"premium": false
},
"text": "That's my password too",
"score": 1,
"created": "1326652326",
"id": "c3hge04",
"parent_id": "t3_of7h2",
"thread_id": "t3_of7h2",
"flags": "A",
"children": []
},
{
"sub": {
"name": "Gaben",
"id": "2scx1",
"subs": -1,
"type": ""
},
"author": {
"name": "Dunge",
"uid": "",
"create": -1,
"flair": null,
"patreon": false,
"premium": false
},
"text": "\"Gembe was led into believing that Valve wanted to employ him as an in-house security auditor. He was to be offered a flight to the USA and was to be arrested on arrival by the FBI.\"\n\nWow that's sad",
"score": 3,
"created": "1330483894",
"id": "c3w2ulz",
"parent_id": "t3_of7h2",
"thread_id": "t3_of7h2",
"flags": "A",
"children": []
},
{
"sub": {
"name": "Gaben",
"id": "2scx1",
"subs": -1,
"type": ""
},
"author": {
"name": "captainregularr",
"uid": "",
"create": -1,
"flair": null,
"patreon": false,
"premium": false
},
"text": "Did you know gaben makes me gaben my gaben?",
"score": 5,
"created": "1326463514",
"id": "c3gsfkx",
"parent_id": "t3_of7h2",
"thread_id": "t3_of7h2",
"flags": "A",
"children": [
{
"sub": {
"name": "Gaben",
"id": "2scx1",
"subs": -1,
"type": ""
},
"author": {
"name": "Turellio",
"uid": "",
"create": -1,
"flair": null,
"patreon": false,
"premium": false
},
"text": "that's what gaben gaben",
"score": 3,
"created": "1326476873",
"id": "c3guihp",
"parent_id": "t1_c3gsfkx",
"thread_id": "t3_of7h2",
"flags": "A",
"children": [
{
"sub": {
"name": "Gaben",
"id": "2scx1",
"subs": -1,
"type": ""
},
"author": {
"name": "captainregularr",
"uid": "",
"create": -1,
"flair": null,
"patreon": false,
"premium": false
},
"text": "I gaben to gaben's demands.",
"score": 5,
"created": "1326477005",
"id": "c3guje0",
"parent_id": "t1_c3guihp",
"thread_id": "t3_of7h2",
"flags": "AE",
"children": [
{
"sub": {
"name": "Gaben",
"id": "2scx1",
"subs": -1,
"type": ""
},
"author": {
"name": "RagingRetard",
"uid": "",
"create": -1,
"flair": null,
"patreon": false,
"premium": false
},
"text": "Oh, quit your incessant gaben.",
"score": 2,
"created": "1326477409",
"id": "c3gulzh",
"parent_id": "t1_c3guje0",
"thread_id": "t3_of7h2",
"flags": "A",
"children": []
}
]
}
]
}
]
}
]
}
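As a quick illustration of how `namedconversation` flattens into one long conversation, here is a sketch that renders a record as plain text. The record below is an abbreviated copy of the sample thread above; the field names match the dataset, but the rendering format is an arbitrary choice for illustration.

```python
# Abbreviated copy of the sample record above.
record = {
    "thread_id": "t3_of7h2",
    "subreddit": "Gaben",
    "namedconversation": [
        {"from": "[deleted]",
         "content": "[13 Jan 2012, 07:01:07] TIL Half-Life 2's source code was hacked ..."},
        {"from": "clydethefrog",
         "content": "[15 Jan 2012, 18:01:06] That's my password too"},
    ],
}

def flatten_thread(record):
    # The first entry of `namedconversation` is the submission; the rest are replies.
    lines = ["r/{} ({})".format(record["subreddit"], record["thread_id"])]
    for turn in record["namedconversation"]:
        lines.append("{}: {}".format(turn["from"], turn["content"]))
    return "\n".join(lines)

print(flatten_thread(record))
```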
Dataset Creation
Curation Rationale
Reddit has graced the world with its unique design and comment style (extremely nested comment chains).
However, we have noted that it is possible to flatten comment chains into one long conversation without the conversation looking too strange or out of place.
Additionally since Reddit goes back to 2005, it has a lot of data that is waiting to be explored and used.
(Plus, recent Large Language Models have been using reddit for quite some time!)
After reviewing UpVoteWeb's curation practices, we have taken it upon ourselves to develop a more open dataset.
Recognising that variety is the spice of life, we only pruned subreddits that do not contain useful data, based on 3 metrics:
- Engagement (how active submissions are relative to the number of comments received: Total Comments / Total Submissions)
- Richness (the ratio of media submissions to all submissions, squared)
- Diversity (the sum of unique comment authors and unique submission authors over the total number of submissions)
In practice, it looks something like this:
```python
# ...
engagement = comment_data["comments"] / submission_data["submissions"]
richness = (submission_data["media"] / submission_data["submissions"]) ** 2
diversity = (
    comment_data["authors"] + submission_data["authors"]
) / submission_data["submissions"]
```
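To make the three metrics concrete, here are hypothetical counts for a single subreddit (illustrative numbers only, not taken from the dataset), plugged into the same formulas:

```python
# Hypothetical per-subreddit counts, for illustration only.
submission_data = {"submissions": 500, "media": 100, "authors": 120}
comment_data = {"comments": 4000, "authors": 800}

# Same formulas as the snippet above.
engagement = comment_data["comments"] / submission_data["submissions"]      # 4000 / 500
richness = (submission_data["media"] / submission_data["submissions"]) ** 2  # (100 / 500) ** 2
diversity = (
    comment_data["authors"] + submission_data["authors"]
) / submission_data["submissions"]                                           # 920 / 500

print(engagement, richness, diversity)
```

A busy, media-light subreddit with many distinct authors scores high on engagement and diversity but low on richness.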
We additionally employ some baseline thresholds, such as minimum counts of submissions, submission authors, comments, and comment authors.
In practice:
```python
if (
    stats_data["submission"]["authors"] < 70  # Total unique authors
    or stats_data["comment"]["authors"] < 20  # Total unique commenters
    or stats_data["submission"]["submissions"] < 450  # Total submission count
    or stats_data["comment"]["comments"] < 585  # Total comment count
):
    continue  # Skip the subreddit
```
With the baseline and these 3 metrics, we filter out a host of low quality subreddits. By this stage, we have successfully selected ~62K subreddits that are of good to high quality.
After filtering subreddits, we then filter submissions and comments by the following:
- We skip submission threads with fewer than 5 comments.
- We prune comments with a score below -4 (score from Reddit defaults).
- For submissions with more than 50 comments, we drop all comments at a nesting depth of 6 (inspired by a RES filter).
- If a comment chain's score drops below 0, we prune the rest of the chain.
- Child comments whose parents were pruned by rules 2-4 are pruned as well.
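The rules above can be sketched as a recursive prune over each thread's comment tree. This is only an illustration of the listed rules, not the actual `RedditThreader.py` logic; the comment-dict shape follows the `comments` structure in the sample.

```python
# Illustrative sketch of the per-thread filters; not the real script.
def prune(comment, depth, many_comments):
    # Rule 2: drop comments scored below -4.
    if comment["score"] < -4:
        return None
    # Rule 3: for busy threads, drop comments nested 6 levels deep.
    if many_comments and depth >= 6:
        return None
    # Rule 4: once a chain's score drops below 0, prune the rest of the chain.
    if comment["score"] < 0:
        return {**comment, "children": []}
    kept = []
    for child in comment["children"]:
        pruned = prune(child, depth + 1, many_comments)
        # Rule 5: children of pruned parents are dropped along with them.
        if pruned is not None:
            kept.append(pruned)
    return {**comment, "children": kept}

def filter_thread(comments, total_comments):
    # Rule 1: skip submission threads with fewer than 5 comments.
    if total_comments < 5:
        return None
    many = total_comments > 50
    return [c for c in (prune(c, 1, many) for c in comments) if c is not None]
```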
For more information, refer to the scripts provided alongside this repo: specifically `RedditScoring.py` for subreddit filtering and `RedditThreader.py` for per-thread filtering.
Source Data
This dataset is a filtered collection of posts and comments from the beginning of Reddit through the end of 2023.
Considerations for Using the Data
Social Impact of Dataset
With the release of this dataset, we aim to make this development resource available to the community at large.
Discussion of Biases
We've decided not to censor NSFW or toxic content. This allows for better toxicity analysis and a more varied dataset.
Additional Information
Recursal's Vision
To make AI accessible to everyone, regardless of language or economic status
This is the collective goal of the RWKV Open Source Foundation and Recursal AI, the commercial entity which backs it.
We believe that AI should not be controlled by a select few organizations, and that it should be made accessible regardless of whether you are rich or poor, or a native speaker of English.
About RWKV
RWKV is an open-source, non-profit group under the Linux Foundation, focused on developing the RWKV AI architecture in accordance with our vision.
The RWKV architecture scales efficiently and economically. As an RNN & Transformer hybrid, it is able to provide performance similar to leading transformer models, while having the compute and energy efficiency of an RNN-based architecture.
You can find out more about the project, and the latest models, at the following links.
About Recursal AI
Recursal AI is the commercial entity built to provide support for RWKV model development and users, while providing commercial services via its public cloud, or private-cloud / on-premise offerings.
As part of our vision, our commitment is to ensure open-source development of, and access to, the best foundational AI models and datasets.
The datasets and models provided here are part of that commitment.
You can find out more about Recursal AI here.
Licensing Information
Since this dataset is derived from a public crawl of reddit, the original content may be subject to copyright and other licensing terms set by the original site owner and/or the content creators.
Additionally, this dataset is for research and archival purposes only.
Citation Information
If you use this dataset in your research or project, please cite it as follows:
@dataset{OKReddit,
  title     = {OKReddit},
  year      = {2024},
  publisher = {KaraKaraWitch},
  url       = {https://huggingface.co/datasets/KaraKaraWitch/OKReddit}
}
Additionally, please cite the following source BibTeX as well.
@article{,
title= {Reddit comments/submissions 2005-06 to 2023-12},
journal= {},
author= {stuck_in_the_matrix, Watchful1, RaiderBDev},
year= {},
url= {},
abstract= {Reddit comments and submissions from 2005-06 to 2023-09 collected by pushshift and u/RaiderBDev.
These are zstandard compressed ndjson files. Example python scripts for parsing the data can be found here https://github.com/Watchful1/PushshiftDumps
The more recent dumps are collected by u/RaiderBDev and questions can be submitted here https://github.com/ArthurHeitmann/arctic_shift},
keywords= {reddit},
terms= {},
license= {},
superseded= {}
}
...
Qngnfrg Mra
- XnenXnenJvgpu @ erphefny.nv FRCG 24
- Nalguvat, naq rirelguvat pna or pbyyngrq vagb qngnfrg.
- Gb orpbzr bar jvgu gur qngn, bar zhfg or jvyyvat gb bcra gurve zvaqf.
- Ab znggre ubj phefrq vg znl frra, gurer'f nyjnlf zber jbefr guvatf bhg gurer.
- NCV Yvzvgf, Cnljnyyf, Fhofpevcgvbaf naq bgure yvzvgngvbaf ner n "fhttrfgvba".
- Vs nyy ryfr snvyf, cebkvrf naq nppbhagf.
- Bar funyy arire cehar pbagrag jvgubhg eulzr be ernfba.
- Hayrff vg'f pyrneyl NV-Fybc. Lbh'er serr gb tb unz.
- Qngnfrgf ner Rireterra, arire qrpvqhbhf.
- Ohvyq gb fpnyr, arire fvatyr-guernqrq.