neil-code/autotrain-test-summarization-84415142559
Summarization
Error code: DatasetGenerationError
Exception: ArrowNotImplementedError
Message: Cannot write struct type '_format_kwargs' with no child field to Parquet. Consider adding a dummy child field.
Traceback:

    Traceback (most recent call last):
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
        writer.write_table(table)
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 583, in write_table
        self._build_writer(inferred_schema=pa_table.schema)
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 404, in _build_writer
        self.pa_writer = self._WRITER_CLASS(self.stream, schema)
      File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1010, in __init__
        self.writer = _parquet.ParquetWriter(
      File "pyarrow/_parquet.pyx", line 2157, in pyarrow._parquet.ParquetWriter.__cinit__
      File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
      File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
    pyarrow.lib.ArrowNotImplementedError: Cannot write struct type '_format_kwargs' with no child field to Parquet. Consider adding a dummy child field.

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2027, in _prepare_split_single
        num_examples, num_bytes = writer.finalize()
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 602, in finalize
        self._build_writer(self.schema)
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 404, in _build_writer
        self.pa_writer = self._WRITER_CLASS(self.stream, schema)
      File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1010, in __init__
        self.writer = _parquet.ParquetWriter(
      File "pyarrow/_parquet.pyx", line 2157, in pyarrow._parquet.ParquetWriter.__cinit__
      File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
      File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
    pyarrow.lib.ArrowNotImplementedError: Cannot write struct type '_format_kwargs' with no child field to Parquet. Consider adding a dummy child field.

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1529, in compute_config_parquet_and_info_response
        parquet_operations = convert_to_parquet(builder)
      File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1154, in convert_to_parquet
        builder.download_and_prepare(
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
        self._download_and_prepare(
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
        self._prepare_split(split_generator, **prepare_split_kwargs)
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
        for job_id, done, content in self._prepare_split_single(
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2038, in _prepare_split_single
        raise DatasetGenerationError("An error occurred while generating the dataset") from e
    datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
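The root failure is pyarrow refusing to write a struct column that has no child fields. A minimal sketch that reproduces it with pyarrow alone (only the column name is taken from the error message; everything else is illustrative):

```python
import pyarrow as pa
import pyarrow.parquet as pq

# An empty dict per row becomes a struct type with zero child fields,
# exactly like the "_format_kwargs" field in the error above.
empty_struct = pa.array([{}], type=pa.struct([]))
table = pa.table({"_format_kwargs": empty_struct})

# Parquet cannot represent a struct with no children, so this raises
# pyarrow.lib.ArrowNotImplementedError: Cannot write struct type
# '_format_kwargs' with no child field to Parquet.
pq.write_table(table, "state.parquet")
```

As the error message itself suggests, the write only succeeds if the struct is given at least one (dummy) child field, or if the empty-struct column is dropped before conversion.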
_data_files (list) | _fingerprint (string) | _format_columns (sequence) | _format_kwargs (dict) | _format_type (null) | _output_all_columns (bool) | _split (null) |
---|---|---|---|---|---|---|
[{"filename": "data-00000-of-00001.arrow"}] | be856d1df3e2ec36 | ["feat_id", "feat_topic", "target", "text"] | {} | null | false | null |
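The column names in this row (`_data_files`, `_fingerprint`, `_format_kwargs`, ...) match the `state.json` metadata that `datasets.Dataset.save_to_disk` writes next to its Arrow shards, which suggests the repository contains a `save_to_disk` directory rather than plain data files; the Parquet conversion then chokes on the empty `_format_kwargs` struct. If that is the case, one possible fix is to reload the saved dataset locally and re-push it so the Hub stores viewer-friendly Parquet shards (the local path below is a hypothetical placeholder):

```python
from datasets import load_from_disk

# Hypothetical path to the directory originally produced by save_to_disk().
ds = load_from_disk("./autotrain-data-test-summarization")

# push_to_hub() re-uploads the records as Parquet shards, which the
# dataset viewer can convert and display without touching state.json.
ds.push_to_hub("neil-code/autotrain-test-summarization-84415142559")
```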
This dataset has been automatically processed by AutoTrain for project test-summarization.
The BCP-47 code for the dataset's language is en.
A sample from this dataset looks as follows:
[
{
"feat_id": "train_1087",
"text": "#Person1#: Hello sir, how can I help you?\n#Person2#: Yes, I need this prescription please.\n#Person1#: Let's see. Okay, so 50 mg of Prozac, would you prefer this in capsule or tablet?\n#Person2#: Capsules are fine.\n#Person1#: Okay, you should take 1 capsule 3 times a day. Be sure not to take it on an empty stomach, and also, don't ever mix it with alcohol!\n#Person2#: Yes, I know. It's not the first time I'm taking this! Don't worry, I won't overdose!\n#Person1#: Okay, anything else I can get you?\n#Person2#: Oh, yes, I almost forgot! Can I also get some eye drops and um, some condoms?\n#Person1#: Sure. Darn condoms aren't registered in our system.\n#Person2#: Oh, well that's okay, I'll get some later, thanks. . . Really it's no problem.\n#Person1#: Just hang on there a sec. Can I get a price check on ' Fun Times Ribbed Condoms ' please!",
"target": "#Person1# gives #Person2# #Person2#'s prescription. #Person2# also wants some eye drops and some condoms but is told that darn condoms are registered in their system.",
"feat_topic": "the pharmacy"
},
{
"feat_id": "train_1193",
"text": "#Person1#: Don't be too sad. If you really think that you have no feeling with him, then, in my opinion, getting divorced maybe is the best way to solve the problem. \n#Person2#: I know clearly at the bottom of my heart. I just can't set my mind at rest because of the child. She's little. She cannot understand us and accept such truth. \n#Person1#: Yeah, child is the matter. Don't tell Jenny the truth, only tell her the white lie. When she grows up, you find the suitable opportunity to tell her. \n#Person2#: I see. OK. ",
"target": "#Person1# suggests #Person2# get divorced if #Person2# has no feeling with a man and tell their daughter the white lie.",
"feat_topic": "divorce"
}
]
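Each record pairs a dialogue (`text`) with its reference summary (`target`), plus an ID and a topic. A preprocessing sketch for seq2seq fine-tuning on these fields (the `t5-small` tokenizer and the length limits are arbitrary illustration choices, not part of the dataset):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")

def preprocess(example):
    # The dialogue is the model input; the reference summary becomes the label.
    model_inputs = tokenizer(example["text"], max_length=1024, truncation=True)
    labels = tokenizer(text_target=example["target"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

# Applied per example, e.g. tokenized = dataset.map(preprocess)
```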
The dataset has the following fields (also called "features"):
{
"feat_id": "Value(dtype='string', id=None)",
"text": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)",
"feat_topic": "Value(dtype='string', id=None)"
}
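For reference, the same schema expressed as a `datasets.Features` object (a direct transcription of the four string fields above):

```python
from datasets import Features, Value

features = Features(
    {
        "feat_id": Value("string"),
        "text": Value("string"),
        "target": Value("string"),
        "feat_topic": Value("string"),
    }
)
```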
This dataset is split into train and validation splits. The split sizes are as follows:
Split name | Num samples |
---|---|
train | 1599 |
valid | 400 |
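Assuming the data files still load despite the broken viewer, the two splits could be fetched individually with `load_dataset`; the split names below follow the table above, though "valid" may be exposed as "validation" depending on how the repository is configured:

```python
from datasets import load_dataset

repo_id = "neil-code/autotrain-test-summarization-84415142559"

train_ds = load_dataset(repo_id, split="train")
valid_ds = load_dataset(repo_id, split="valid")  # may be "validation" instead

print(len(train_ds), len(valid_ds))  # expected 1599 and 400 per the table above
```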