---
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 45957987986
    num_examples: 16896817
  download_size: 21312867175
  dataset_size: 45957987986
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- text-generation
language:
- fa
pretty_name: 'HmBlogs: A big general Persian corpus'
size_categories:
- 10M<n<100M
---
HmBlogs: A big general Persian corpus
HmBlogs is a general Persian corpus collected from nearly 20 million blog posts over a period of 15 years, containing 6.8 billion tokens. This is the preprocessed version of the dataset, prepared by the original authors and converted to a format that integrates with 🤗 Datasets. To access the raw versions, visit the official link at http://nlplab.sbu.ac.ir/hmBlogs-v3.
Paper: https://arxiv.org/abs/2111.02362
Authors: Hamzeh Motahari Khansari, Mehrnoush Shamsfard
Original Link: http://nlplab.sbu.ac.ir/hmBlogs-v3/
Usage
This dataset can be used for masked/causal language modeling. You can load it as shown below:
from datasets import load_dataset
# Load the whole dataset
dataset = load_dataset("arxyzan/hmblogs-clean", split="train")
# Load a portion by %
dataset = load_dataset("arxyzan/hmblogs-clean", split="train[:50%]")
# Load a custom shard
dataset = load_dataset("arxyzan/hmblogs-clean", data_files=["data/train-00000-of-00046.parquet", "data/train-00001-of-00046.parquet"])
Citation
@article{DBLP:journals/corr/abs-2111-02362,
author = {Hamzeh Motahari Khansari and
Mehrnoush Shamsfard},
title = {HmBlogs: {A} big general Persian corpus},
journal = {CoRR},
volume = {abs/2111.02362},
year = {2021},
url = {https://arxiv.org/abs/2111.02362},
eprinttype = {arXiv},
eprint = {2111.02362},
timestamp = {Fri, 05 Nov 2021 15:25:54 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2111-02362.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}