Bittensor Subnet 13 Reddit Dataset
Dataset Summary
This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed Reddit data. The data is continuously updated by network miners, providing a real-time stream of Reddit content for various analytical and machine learning tasks. For more information about the dataset, please visit the official repository.
Supported Tasks
The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs. For example:
- Sentiment Analysis
- Topic Modeling
- Community Analysis
- Content Categorization
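As an illustration of the Sentiment Analysis task listed above, here is a minimal sketch. It assumes the default configuration streams from the Hub, that the `text` and `communityName` fields exist as documented below, and that the default English sentiment model shipped with `transformers` is acceptable for a demo; none of these choices are prescribed by the dataset itself.

```python
# A minimal sketch of the Sentiment Analysis task listed above.
# Assumptions: the default configuration streams from the Hub, the "text"
# and "communityName" fields exist as documented below, and the default
# English sentiment model from transformers is good enough for a demo.
from datasets import load_dataset
from transformers import pipeline

ds = load_dataset("SAVE0x0/reddit_dataset_218", split="train", streaming=True)
sentiment = pipeline("sentiment-analysis")

for row in ds.take(5):
    result = sentiment(row["text"][:512])[0]  # truncate long posts/comments
    print(row["communityName"], result["label"], round(result["score"], 3))
```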
Languages
Primary language: English. Because the data is collected by a decentralized network of miners, entries in other languages may also appear.
Dataset Structure
Data Instances
Each instance represents a single Reddit post or comment with the following fields:
Data Fields
- text (string): The main content of the Reddit post or comment.
- label (string): Sentiment or topic category of the content.
- dataType (string): Indicates whether the entry is a post or a comment.
- communityName (string): The name of the subreddit where the content was posted.
- datetime (string): The date when the content was posted or commented on.
- username_encoded (string): An encoded version of the username to maintain user privacy.
- url_encoded (string): An encoded version of any URLs included in the content.
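A minimal sketch of streaming a few rows and inspecting the fields above, assuming the default configuration and a "train" split exposed on the Hub:

```python
# A minimal sketch of streaming a few rows and inspecting the fields above.
# Assumes the default configuration and a "train" split on the Hub.
from datasets import load_dataset

ds = load_dataset("SAVE0x0/reddit_dataset_218", split="train", streaming=True)

for row in ds.take(3):
    print(row["dataType"], row["communityName"], row["datetime"])
    print(row["text"][:200])
    print(row["username_encoded"], row["url_encoded"])
```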
Data Splits
This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
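A minimal sketch of a user-defined, time-based split. It assumes the default configuration loads, that the `datetime` field is an ISO-8601 string (so lexicographic comparison matches chronological order), and uses a small slice for illustration since the full dataset is large; the cutoff date is arbitrary.

```python
# A minimal sketch of a user-defined, time-based split.
# Assumptions: the default configuration loads, the "datetime" field is an
# ISO-8601 string (so string comparison matches chronological order), and a
# small slice is enough for illustration; the full dataset is large.
from datasets import load_dataset

ds = load_dataset("SAVE0x0/reddit_dataset_218", split="train[:100000]")

cutoff = "2024-06-01"
train = ds.filter(lambda row: row["datetime"] < cutoff)
test = ds.filter(lambda row: row["datetime"] >= cutoff)
print(len(train), len(test))
```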
Dataset Creation
Source Data
Data is collected from public posts and comments on Reddit, adhering to the platform's terms of service and API usage guidelines.
Personal and Sensitive Information
All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
Considerations for Using the Data
Social Impact and Biases
Users should be aware of potential biases inherent in Reddit data, including demographic and content biases. This dataset reflects the content and opinions expressed on Reddit and should not be considered a representative sample of the general population.
Limitations
- Data quality may vary due to the nature of social media sources.
- The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
- Temporal biases may exist due to real-time collection methods.
- The dataset is limited to public subreddits and does not include private or restricted communities.
Additional Information
Licensing Information
The dataset is released under the MIT license. Use of this dataset is also subject to the Reddit Terms of Use.
Citation Information
If you use this dataset in your research, please cite it as follows:
@misc{SAVE0x02024datauniversereddit_dataset_218,
title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
author={SAVE0x0},
year={2024},
url={https://huggingface.co/datasets/SAVE0x0/reddit_dataset_218},
}
Contributions
To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.
Dataset Statistics
[This section is automatically updated]
- Total Instances: 30818912
- Date Range: 2010-04-28 to 2024-11-22
- Last Updated: 2024-12-17
Data Distribution
- Posts: 4.61%
- Comments: 95.39%
Top 10 Subreddits
For full statistics, please refer to the reddit_stats.json file in the repository.
Rank | Subreddit | Percentage |
---|---|---|
1 | r/AmItheAsshole | 3.09% |
2 | r/politics | 2.89% |
3 | r/AskReddit | 2.76% |
4 | r/wallstreetbets | 2.72% |
5 | r/teenagers | 2.34% |
6 | r/NoStupidQuestions | 2.15% |
7 | r/nfl | 2.02% |
8 | r/pics | 1.93% |
9 | r/mildlyinfuriating | 1.91% |
10 | r/gaming | 1.85% |
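Below is a minimal sketch of fetching the reddit_stats.json file referenced above. It assumes the file sits at the root of this dataset repository; its exact schema is not documented here, so the sketch only previews the structure.

```python
# A minimal sketch of fetching the reddit_stats.json file referenced above.
# Assumption: the file sits at the root of this dataset repository; its
# exact schema is not documented here, so we only preview the structure.
import json
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="SAVE0x0/reddit_dataset_218",
    filename="reddit_stats.json",
    repo_type="dataset",
)
with open(path) as f:
    stats = json.load(f)

print(json.dumps(stats, indent=2)[:500])  # preview the top of the structure
```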
Update History
Date | New Instances | Total Instances |
---|---|---|
2024-12-17 | 2 | 2 |
2024-12-10 | 2 | 4 |
2024-12-02 | 2 | 6 |
2024-11-25 | 30818900 | 30818906 |
2024-11-29 | 2 | 30818908 |
2024-12-06 | 2 | 30818910 |
2024-12-13 | 2 | 30818912 |