---
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 45957987986
    num_examples: 16896817
  download_size: 21312867175
  dataset_size: 45957987986
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- text-generation
language:
- fa
pretty_name: 'HmBlogs: A big general Persian corpus'
size_categories:
- 10M<n<100M
---

# HmBlogs: A big general Persian corpus

**Authors:** Hamzeh Motahari Khansari, Mehrnoush Shamsfard
**Original Link:** http://nlplab.sbu.ac.ir/hmBlogs-v3/
## Usage

This dataset can be used for masked/causal language modeling. You can easily load it as shown below:

```python
from datasets import load_dataset

# Load the whole dataset
dataset = load_dataset("arxyzan/hmblogs-clean", split="train")

# Load a portion by percentage
dataset = load_dataset("arxyzan/hmblogs-clean", split="train[:50%]")

# Load specific shards
dataset = load_dataset(
    "arxyzan/hmblogs-clean",
    data_files=[
        "data/train-00000-of-00046.parquet",
        "data/train-00001-of-00046.parquet",
    ],
)
```

## Citation

```bibtex
@article{DBLP:journals/corr/abs-2111-02362,
  author     = {Hamzeh Motahari Khansari and Mehrnoush Shamsfard},
  title      = {HmBlogs: {A} big general Persian corpus},
  journal    = {CoRR},
  volume     = {abs/2111.02362},
  year       = {2021},
  url        = {https://arxiv.org/abs/2111.02362},
  eprinttype = {arXiv},
  eprint     = {2111.02362},
  timestamp  = {Fri, 05 Nov 2021 15:25:54 +0100},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2111-02362.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```
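The Usage section above notes that this corpus can feed masked/causal language modeling. As a minimal, self-contained sketch of the usual causal-LM preprocessing step (concatenate tokenized texts, then split them into fixed-length blocks), here is a plain-Python illustration; the whitespace split is only a stand-in for a real Persian subword tokenizer, and `group_texts` is a hypothetical helper name, not part of this dataset:

```python
# Sketch (not the authors' pipeline): group corpus texts into
# fixed-length token blocks for causal language modeling.

def group_texts(texts, block_size=8):
    """Concatenate tokenized texts and split into equal-size blocks,
    dropping the ragged remainder, as is common in causal-LM prep.
    The whitespace split is a stand-in for a real subword tokenizer."""
    tokens = [tok for text in texts for tok in text.split()]
    total = (len(tokens) // block_size) * block_size
    return [tokens[i:i + block_size] for i in range(0, total, block_size)]

blocks = group_texts(["a b c d e", "f g h i j k"], block_size=4)
# 11 tokens with block_size=4 -> two full blocks; the last 3 tokens are dropped.
```

In practice one would map a function like this over the loaded `dataset` (e.g. with `dataset.map(..., batched=True)`) after tokenization.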