Datasets: The fastest way to load and process this dataset
The dataset is amazingly large. I tried setting num_proc in the load_dataset method, hoping it would use multiprocessing when generating the splits. However, I found that no matter whether I set the value to 2 or to 50, the speed remained unchanged. On my computer it's around 179 examples/s, which means it needs 33 hours to generate the training split!
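Roughly what I tried (the dataset name below is just a placeholder for this one):

```python
from datasets import load_dataset

# "<this_dataset>" is a placeholder for this dataset's name. I passed num_proc
# hoping split generation would be parallelized, but 2 vs. 50 made no difference.
dataset = load_dataset("<this_dataset>", num_proc=2)
```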
I also manually set num_proc in the file retrieval_rag.py at L264; the multiprocessing seems to happen in builder.py in the datasets library:
```python
with Pool(num_proc) as pool:
    for job_id, done, content in iflatmap_unordered(pool, self._prepare_split_single, args_per_job):
        if done:
            ...
```
I'm wondering whether the implementation is problematic.
Thanks for reporting, @drt.
In the latest release of the datasets library (2.7), we implemented support for multiprocessing when building a dataset. See the release notes: https://github.com/huggingface/datasets/releases/tag/2.7.0

Please update your datasets library and tell us if this increases your speed:
```bash
pip install -U datasets
```
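After upgrading, you can verify the version and retry with num_proc (the dataset name is again a placeholder):

```python
import datasets
from datasets import load_dataset

print(datasets.__version__)  # num_proc support in load_dataset requires >= 2.7.0

# Placeholder dataset name; num_proc sets the number of worker processes
# used to download and prepare the dataset.
dataset = load_dataset("<this_dataset>", num_proc=8)
```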
Multiprocessing is useful when a dataset is split into multiple files; however, this dataset is made of one single file. I'm afraid you can't really parallelize the dataset loading using the original data format.
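If you can re-shard the raw data yourself, multiprocessing does become applicable. A minimal sketch, assuming you export the data to JSON Lines shards (the file names and format are illustrative, not part of this dataset):

```python
from datasets import load_dataset

# Hypothetical shards produced from the single source file. With multiple
# input files, each worker process can prepare a subset in parallel.
data_files = {"train": [f"shards/train-{i:05d}.jsonl" for i in range(8)]}

dataset = load_dataset("json", data_files=data_files, num_proc=8)
```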