Error code: DatasetGenerationError
Exception: ArrowNotImplementedError
Message: Cannot write struct type 'meta' with no child field to Parquet. Consider adding a dummy child field.
Traceback: Traceback (most recent call last):
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
    writer.write_table(table)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 583, in write_table
    self._build_writer(inferred_schema=pa_table.schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 404, in _build_writer
    self.pa_writer = self._WRITER_CLASS(self.stream, schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1010, in __init__
    self.writer = _parquet.ParquetWriter(
  File "pyarrow/_parquet.pyx", line 2157, in pyarrow._parquet.ParquetWriter.__cinit__
  File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Cannot write struct type 'meta' with no child field to Parquet. Consider adding a dummy child field.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2027, in _prepare_split_single
    num_examples, num_bytes = writer.finalize()
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 602, in finalize
    self._build_writer(self.schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 404, in _build_writer
    self.pa_writer = self._WRITER_CLASS(self.stream, schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1010, in __init__
    self.writer = _parquet.ParquetWriter(
  File "pyarrow/_parquet.pyx", line 2157, in pyarrow._parquet.ParquetWriter.__cinit__
  File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Cannot write struct type 'meta' with no child field to Parquet. Consider adding a dummy child field.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1529, in compute_config_parquet_and_info_response
    parquet_operations = convert_to_parquet(builder)
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1154, in convert_to_parquet
    builder.download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
    self._download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2038, in _prepare_split_single
    raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
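The root cause is the `meta` column: every sample's `meta` is an empty dict, so pyarrow infers a struct type with zero child fields, which Parquet cannot represent. Below is a minimal sketch of the two usual workarounds when converting such data yourself (drop the empty column, or give the struct a dummy child field, as the error message suggests); the file names and the `placeholder` field are illustrative only, not part of this dataset.

```python
import pyarrow as pa
import pyarrow.parquet as pq

# Reproduce the issue: every `meta` value is an empty dict, so the column
# is inferred as a struct with no child fields.
table = pa.table({
    "meta": pa.array([{}, {}], type=pa.struct([])),
    "text": ["sample one", "sample two"],
})

# Workaround 1: drop the empty struct column before writing to Parquet.
pq.write_table(table.drop(["meta"]), "books_no_meta.parquet")

# Workaround 2: replace the column with a struct that has a dummy child field.
dummy_meta = pa.array(
    [{"placeholder": None}] * table.num_rows,
    type=pa.struct([pa.field("placeholder", pa.string())]),
)
fixed = table.set_column(table.schema.get_field_index("meta"), "meta", dummy_meta)
pq.write_table(fixed, "books_dummy_meta.parquet")
```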
| meta (dict) | text (string) | stats (dict) | simhash (float64) |
|---|---|---|---|
| {} | "Produced by Suzanne Shell, Sjaani and PG Distributed Proofreaders\n\n\n\n\nTHE HOUSE ON THE BORDERL(...TRUNCATED) | {"alnum_ratio":0.7730060037,"avg_line_length":53.1530147895,"char_rep_ratio":0.0357377283,"flagged_w(...TRUNCATED) | 16,879,431,143,088,323,000 |
| {} | "This file was produced from images generously made available by the Bibliotheque nationale de Franc(...TRUNCATED) | {"alnum_ratio":0.7912897832,"avg_line_length":59.7947893757,"char_rep_ratio":0.0363826187,"flagged_w(...TRUNCATED) | 3,644,732,410,104,741,400 |
| {} | "Produced by Charles Aldarondo, Charlie Kirschner\nand the Online Distributed Proofreading Team.\n\n(...TRUNCATED) | {"alnum_ratio":0.7837232004,"avg_line_length":55.9332659252,"char_rep_ratio":0.0307179452,"flagged_w(...TRUNCATED) | 11,089,683,488,658,866,000 |
| {} | "Produced by Christine De Ryck, Stig M. Valstad, Suzanne L. Shell\nand PG Distributed Proofreaders\n(...TRUNCATED) | {"alnum_ratio":0.7885623647,"avg_line_length":58.9589676671,"char_rep_ratio":0.032107734,"flagged_wo(...TRUNCATED) | 6,291,782,967,138,216,000 |
| {} | "Produced by Ted Garvin, Dave Morgan and PG Distributed Proofreaders\n\n\n\n\nLA FIAMMETTA\n\nBY\n\n(...TRUNCATED) | {"alnum_ratio":0.7910396996,"avg_line_length":62.9837189374,"char_rep_ratio":0.0197705904,"flagged_w(...TRUNCATED) | 6,492,970,723,819,643,000 |
| {} | "Produced by Suzanne Shell, Sjaani and PG Distributed Proofreaders\n\n\n\n\nCARMILLA\n\nJ. Sheridan (...TRUNCATED) | {"alnum_ratio":0.7786372339,"avg_line_length":47.4699603538,"char_rep_ratio":0.0261364001,"flagged_w(...TRUNCATED) | 7,426,944,496,859,436,000 |
| {} | "Produced by Suzanne Shell, Danny Wool, Luiz Antonio de Souza,\nElisa Williams, Tonya Allen and PG D(...TRUNCATED) | {"alnum_ratio":0.7687656563,"avg_line_length":43.1778642555,"char_rep_ratio":0.0327301742,"flagged_w(...TRUNCATED) | 14,936,655,909,178,157,000 |
| {} | "Produced by Dennis McCarthy\n\n\n\n\n\n\n\n\n\nTHE DIVINE COMEDY\n\nOF DANTE ALIGHIERI\n(1265-1321)(...TRUNCATED) | {"alnum_ratio":0.7484757649,"avg_line_length":32.4585585586,"char_rep_ratio":0.0269562603,"flagged_w(...TRUNCATED) | 16,855,008,483,676,178,000 |
| {} | "Produced by Jonathan Ingram and PG Distributed Proofreaders\n\n\n\n\nTHE EULOGIES OF HOWARD.\n\nA V(...TRUNCATED) | {"alnum_ratio":0.8076707115,"avg_line_length":61.8861003861,"char_rep_ratio":0.0252866391,"flagged_w(...TRUNCATED) | 15,541,307,282,155,172,000 |
| {} | "Produced by Andrew Heath, Joshua Hutchinson, Audrey Longhurst\nand PG Distributed Proofreaders\n\n\(...TRUNCATED) | {"alnum_ratio":0.7702943221,"avg_line_length":37.780952381,"char_rep_ratio":0.1020916411,"flagged_wo(...TRUNCATED) | 8,229,905,533,405,478,000 |
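The `stats` struct stores the per-sample quality measures computed during refinement (alphanumeric ratio, average line length, character repetition ratio, flagged-word ratio, and so on), and `simhash` is the fingerprint used for deduplication. As a rough illustration of what two of these measures capture (not necessarily Data-Juicer's exact definitions), they could be approximated like this:

```python
def alnum_ratio(text: str) -> float:
    # Fraction of characters that are alphanumeric.
    return sum(ch.isalnum() for ch in text) / max(len(text), 1)

def avg_line_length(text: str) -> float:
    # Mean number of characters per line.
    lines = text.splitlines() or [""]
    return sum(len(line) for line in lines) / len(lines)

sample = "Produced by Dennis McCarthy\n\nTHE DIVINE COMEDY\nOF DANTE ALIGHIERI"
print(alnum_ratio(sample), avg_line_length(sample))
```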
A refined version of the Book dataset from RedPajama, produced by Data-Juicer. Some "bad" samples were removed from the original dataset to make it higher quality.
This dataset is typically used to pretrain a Large Language Model.
Notice: this is a small subset for previewing. The whole dataset is available here (about 91GB).
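A minimal loading sketch with the Hugging Face datasets library; the repository ID below is a placeholder for this dataset's actual ID on the Hub, and streaming avoids downloading the full ~91GB up front.

```python
from datasets import load_dataset

# Placeholder repository ID; replace it with this dataset's actual ID on the Hub.
ds = load_dataset("datajuicer/<this-dataset>", split="train", streaming=True)

# Peek at a couple of samples and their refinement statistics.
for sample in ds.take(2):
    print(sample["text"][:200])
    print(sample["stats"]["alnum_ratio"], sample["simhash"])
```

The configuration below is the Data-Juicer refining recipe used to produce this dataset.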
# global parameters
project_name: 'Data-Juicer-recipes-book'
dataset_path: '/path/to/your/dataset'  # path to your dataset directory or file
export_path: '/path/to/your/dataset.jsonl'
np: 50  # number of subprocesses to process your dataset
open_tracer: true

# process schedule
# a list of several process operators with their arguments
process:
  - clean_email_mapper:
  - clean_links_mapper:
  - fix_unicode_mapper:
  - punctuation_normalization_mapper:
  - whitespace_normalization_mapper:
  - alphanumeric_filter:
      tokenization: false
      min_ratio: 0.55  # <3sigma (0.697)
      max_ratio: 0.854  # 3sigma
  - average_line_length_filter:  # for code
      max_len: 500  # >3sigma (364)
  - character_repetition_filter:
      rep_len: 10
      max_ratio: 0.2  # >3sigma (0.12)
  - flagged_words_filter:
      lang: en
      tokenization: true
      max_ratio: 0.00047  # 3sigma
  - language_id_score_filter:  # remove language filter
      min_score: 0.2
  - maximum_line_length_filter:  # for code
      max_len: 13381  # 3sigma
  - perplexity_filter:
      lang: en
      max_ppl: 6000  # <3sigma (16516)
  - special_characters_filter:
      max_ratio: 0.5  # >3sigma (0.32)
  - words_num_filter:
      lang: en
      tokenization: true
      min_num: 1000
      max_num: 539754  # 3sigma
  - word_repetition_filter:
      lang: en
      tokenization: true
      rep_len: 10
      max_ratio: 0.194  # 3sigma
  - document_simhash_deduplicator:
      tokenization: space
      window_size: 6
      lowercase: true
      ignore_pattern: '\p{P}'
      num_blocks: 6
      hamming_distance: 4
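This recipe is meant to be passed to Data-Juicer's data processing tool (typically `python tools/process_data.py --config <recipe>.yaml` from a Data-Juicer checkout). As a small sanity-check sketch, assuming the recipe above is saved as `redpajama-book-refine.yaml` (an illustrative file name), the operator schedule can be inspected with PyYAML:

```python
import yaml

# Illustrative file name; point this at wherever the recipe above is saved.
with open("redpajama-book-refine.yaml") as f:
    cfg = yaml.safe_load(f)

print(cfg["project_name"], "->", cfg["export_path"])

# Each entry in `process` is a single-key mapping: operator name -> arguments (or None).
for op in cfg["process"]:
    (name, args), = op.items()
    print(name, args or {})
```

The `3sigma` comments in the recipe appear to record filter thresholds derived from three-standard-deviation bounds on each statistic's distribution in the original data.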