Dataset Viewer issue
The dataset viewer is not working.
Error details:

Error code: `ResponseNotFound`
Some splits work in the viewer, while other splits show an error.
Hi! I'm taking a look at the error.

I'm seeing more details about it in our logs from when the dataset is stream-converted to Parquet:
```
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1676, in _prepare_split_single
    for key, record in generator:
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 798, in wrapped
    for item in generator(*args, **kwargs):
  File "/tmp/modules-cache/datasets_modules/datasets/qgyd2021--h_novel/1b45962e8e8b0496fd528a12693068d7d0d5c5f0d6399353ca0dad0d2e5cecea/h_novel.py", line 109, in _generate_examples
    with open(filename.as_posix(), "r", encoding="utf-8") as f:
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/streaming.py", line 74, in wrapper
    return function(*args, download_config=download_config, **kwargs)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 496, in xopen
    file_obj = fsspec.open(file, mode=mode, *args, **kwargs).open()
  File "/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/core.py", line 439, in open
    return open_files(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/core.py", line 194, in __getitem__
    out = super().__getitem__(item)
IndexError: list index out of range
```
I think the issue comes from the fact that the files are downloaded in `_generate_examples` instead of `_split_generators`. Could you move the data-files downloading part to `_split_generators`?
In line with @lhoestq's suggestion, I would also recommend compressing each of the folders inside `/data` (for example, compress the `ltxsba` folder into `ltxsba.zip`). That would optimize the download time, as recommended in our docs: https://huggingface.co/docs/datasets/v2.14.4/en/upload_dataset#upload-dataset
> For text data extensions like `.csv`, `.json`, `.jsonl`, and `.txt`, we recommend compressing them before uploading to the Hub (to `.zip` or `.gz` file extension for example).
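One way to do the compression is with the standard library's `shutil.make_archive`. This sketch builds a throwaway `data/ltxsba` folder (the file names inside are made up) and zips it into `ltxsba.zip`, which could then replace the folder in the repo:

```python
import pathlib
import shutil
import tempfile

# Hypothetical layout: data/ltxsba/ holds .txt files (names are placeholders).
root = pathlib.Path(tempfile.mkdtemp())
folder = root / "data" / "ltxsba"
folder.mkdir(parents=True)
(folder / "part1.txt").write_text("example line\n", encoding="utf-8")

# Compress the folder's contents into data/ltxsba.zip.
archive = shutil.make_archive(str(root / "data" / "ltxsba"), "zip", root_dir=folder)
print(pathlib.Path(archive).name)  # → ltxsba.zip
```

The same could be done per folder in a loop over `(root / "data").iterdir()`.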