---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: content
    dtype: string
  - name: score
    dtype: int64
  - name: poster
    dtype: string
  - name: date_utc
    dtype: timestamp[ns]
  - name: flair
    dtype: string
  - name: title
    dtype: string
  - name: permalink
    dtype: string
  - name: nsfw
    dtype: bool
  - name: updated
    dtype: bool
  - name: new
    dtype: bool
  splits:
  - name: train
    num_bytes: 50994948
    num_examples: 98828
  download_size: 31841070
  dataset_size: 50994948
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

--- Generated Part of README Below ---

## Dataset Overview

The goal is to maintain an open dataset of [r/uwaterloo](https://www.reddit.com/r/uwaterloo/) submissions. Data is downloaded with PRAW via the Reddit API. A single API call is capped at 1000 submissions and search functionality is limited, so the job runs hourly to collect new submissions.

## Creation Details

This dataset was created by [alvanlii/dataset-creator-reddit-uwaterloo](https://huggingface.co/spaces/alvanlii/dataset-creator-reddit-uwaterloo).

## Update Frequency

The dataset is updated hourly; the most recent update was `2024-08-29 14:00:00 UTC+0000`, which added **0 new rows**.

## Licensing

[Reddit Licensing terms](https://www.redditinc.com/policies/data-api-terms) as accessed on October 25:
[License information]

## Opt-out

To opt out of this dataset, please open a pull request with your justification and add your ids to filter_ids.json:

1. Go to [filter_ids.json](https://huggingface.co/spaces/reddit-tools-HF/dataset-creator-reddit-bestofredditorupdates/blob/main/filter_ids.json)
2. Click Edit
3. Add your ids, one per row
4. Comment with your justification