Unable to load dataset #3
by arkadyark - opened
Hello! When I try to load the dataset with the HF datasets library, I get errors like this one, whichever split I try:
>>> dataset = load_dataset("MLCommons/peoples_speech")
No config specified, defaulting to: peoples_speech/clean
Downloading and preparing dataset peoples_speech/clean to /Users/ark/.cache/huggingface/datasets/MLCommons___peoples_speech/clean/1.1.0/ba1d0aa00c7f5a8b1f94c3f3893c846ec7f3b1850cff984d66e7f41825f123db...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/ark/miniconda3/envs/play/lib/python3.8/site-packages/datasets/load.py", line 1742, in load_dataset
    builder_instance.download_and_prepare(
  File "/Users/ark/miniconda3/envs/play/lib/python3.8/site-packages/datasets/builder.py", line 814, in download_and_prepare
    self._download_and_prepare(
  File "/Users/ark/miniconda3/envs/play/lib/python3.8/site-packages/datasets/builder.py", line 1423, in _download_and_prepare
    super()._download_and_prepare(
  File "/Users/ark/miniconda3/envs/play/lib/python3.8/site-packages/datasets/builder.py", line 883, in _download_and_prepare
    split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
  File "/Users/ark/.cache/huggingface/modules/datasets_modules/datasets/MLCommons--peoples_speech/ba1d0aa00c7f5a8b1f94c3f3893c846ec7f3b1850cff984d66e7f41825f123db/peoples_speech.py", line 137, in _split_generators
    n_files_train = self._get_n_files(dl_manager, split="train", config=self.config.name)
  File "/Users/ark/.cache/huggingface/modules/datasets_modules/datasets/MLCommons--peoples_speech/ba1d0aa00c7f5a8b1f94c3f3893c846ec7f3b1850cff984d66e7f41825f123db/peoples_speech.py", line 113, in _get_n_files
    return int(f.read().strip())
ValueError: invalid literal for int() with base 10: 'version https://git-lfs.github.com/spec/v1\n- size 4482'
I am running the latest version of the datasets library.
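For context on what the loader is choking on: judging by the error string, the file the script downloads is an unresolved Git LFS pointer rather than the real counts file, so parsing its contents as an integer fails. Below is a minimal sketch (not the dataset's actual script; the pointer text is reconstructed only from what the traceback shows) that reproduces the same parsing failure and shows a simple way to detect the situation.

# Hypothetical reproduction of the failure, assuming the downloaded file
# contains a Git LFS pointer header like the one quoted in the ValueError.
pointer_text = "version https://git-lfs.github.com/spec/v1\nsize 4482"

try:
    # Mirrors the failing call in _get_n_files: int(f.read().strip())
    n_files_train = int(pointer_text.strip())
except ValueError as err:
    # Same error as in the traceback: the text is a pointer, not a file count.
    print(err)

# Defensive check: an LFS pointer always starts with this version line,
# so its presence means the download was never resolved to the real file.
if pointer_text.startswith("version https://git-lfs.github.com/spec/v1"):
    print("Got a Git LFS pointer instead of the actual counts file.")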