NonMatchingSplitsSizesError from huggingface dataset for POJ-104
#1 by bstee615 - opened
Originally posted at https://github.com/microsoft/CodeXGLUE/issues/135, I think this is the more proper place to get help:
I get this error when I try to load your POJ-104 dataset from Hugging Face using load_dataset.
NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=18878686, num_examples=32000, dataset_name='code_x_glue_cc_clone_detection_poj104'), 'recorded': SplitInfo(name='train', num_bytes=20179075, num_examples=32500, dataset_name='code_x_glue_cc_clone_detection_poj104')}, {'expected': SplitInfo(name='validation', num_bytes=5765303, num_examples=8000, dataset_name='code_x_glue_cc_clone_detection_poj104'), 'recorded': SplitInfo(name='validation', num_bytes=6382433, num_examples=8500, dataset_name='code_x_glue_cc_clone_detection_poj104')}]
As far as I can tell, the dataset expects 500 fewer examples per split than the downloaded files actually contain. I attached a notebook which reproduces the issue:
- Notebook (run with Python 3.8): test_poj104.zip
- Output: test_poj104.pdf
Could you fix the issue so that we can load the dataset correctly without ignore_verifications=True?
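For reference, a minimal sketch of the call that triggers the error and of the temporary workaround, assuming a datasets release that still accepts ignore_verifications; the import path used for NonMatchingSplitsSizesError is likewise an assumption based on those releases:

```python
from datasets import load_dataset
# Assumed import path in older `datasets` releases; adjust if it differs.
from datasets.utils.info_utils import NonMatchingSplitsSizesError

try:
    # Fails: the downloaded files hold 500 more examples per split than
    # the split sizes recorded in the dataset metadata.
    ds = load_dataset("code_x_glue_cc_clone_detection_poj104")
except NonMatchingSplitsSizesError:
    # Temporary workaround until the metadata is fixed: skip verification.
    ds = load_dataset(
        "code_x_glue_cc_clone_detection_poj104",
        ignore_verifications=True,
    )
```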
I face the same bug when loading the dataset.
Thanks for reporting, @benjis and @CM.
I am fixing it.
Fixed by #3.
albertvillanova changed discussion status to closed