Dataset Viewer
The dataset viewer is not available for this split.
Cannot load the dataset split (in streaming mode) to extract the first rows.
Error code:   StreamingRowsError
Exception:    ArrowInvalid
Message:      Not an Arrow file
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/arrow/arrow.py", line 67, in _generate_tables
                  batches = pa.ipc.open_stream(f)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/ipc.py", line 190, in open_stream
                  return RecordBatchStreamReader(source, options=options,
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/ipc.py", line 52, in __init__
                  self._open(source, options=options, memory_pool=memory_pool)
                File "pyarrow/ipc.pxi", line 974, in pyarrow.lib._RecordBatchStreamReader._open
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowInvalid: Expected to read 827474256 metadata bytes, but only read 477200261
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/utils.py", line 90, in get_rows_or_raise
                  return get_rows(
                File "/src/libs/libcommon/src/libcommon/utils.py", line 197, in decorator
                  return func(*args, **kwargs)
                File "/src/services/worker/src/worker/utils.py", line 68, in get_rows
                  rows_plus_one = list(itertools.islice(ds, rows_max_number + 1))
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 2068, in __iter__
                  for key, example in ex_iterable:
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 272, in __iter__
                  for key, pa_table in self.generate_tables_fn(**gen_kwags):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/arrow/arrow.py", line 69, in _generate_tables
                  reader = pa.ipc.open_file(f)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/ipc.py", line 234, in open_file
                  return RecordBatchFileReader(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/ipc.py", line 110, in __init__
                  self._open(source, footer_offset=footer_offset,
                File "pyarrow/ipc.pxi", line 1058, in pyarrow.lib._RecordBatchFileReader._open
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowInvalid: Not an Arrow file

Dataset Card: NbAiLab/distil_raw_ncc_speech_v7

  • Internal dataset created as input for generating pseudo labels (see the sketch below).
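
The card does not document how the pseudo labels are produced. As a rough, hypothetical sketch, one could stream the audio and transcribe it with a teacher ASR model; the teacher model name, the split name, and the "audio" column are assumptions, not details from this card:

    from datasets import load_dataset
    from transformers import pipeline

    # Stream the dataset; access to the private repository is required.
    ds = load_dataset("NbAiLab/distil_raw_ncc_speech_v7", split="train", streaming=True)

    # Hypothetical teacher model; the teacher actually used internally is not documented.
    teacher = pipeline("automatic-speech-recognition", model="openai/whisper-large-v3")

    for example in ds.take(5):
        # "audio" is an assumed column name; the pipeline accepts the datasets audio dict directly.
        pseudo_label = teacher(example["audio"])["text"]
        print(pseudo_label)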

General Information

The dataset is based on ncc_speech_v7 (Norwegian Colossal Corpus - Speech). It is filtered to include only entries where the text language is Norwegian and where the source is not "nrk_translate".
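
As a rough illustration of that filtering step (the upstream repository ID, split name, the column names "text_language" and "source", and the language code are assumptions, since the upstream schema is not documented here):

    from datasets import load_dataset

    # Load the upstream corpus in streaming mode (requires access to the private data).
    ds = load_dataset("NbAiLab/ncc_speech_v7", split="train", streaming=True)

    # Keep Norwegian-language entries and drop everything sourced from "nrk_translate".
    filtered = ds.filter(
        lambda example: example["text_language"] == "no"
        and example["source"] != "nrk_translate"
    )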

Potential Use Cases

The ncc_speech_v7 corpus can be used for various purposes, including but not limited to the following (a short loading sketch follows the list):

  • Training Automatic Speech Recognition models.
  • Building text-to-speech systems.
  • Research in speech recognition and natural language processing.
  • Developing language models.
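
As a minimal sketch of loading the dataset for such use cases (the split name and the "audio" column are assumptions), the audio can be streamed and resampled to the 16 kHz rate most ASR models expect:

    from datasets import Audio, load_dataset

    # Stream the dataset; access to the private repository is required.
    ds = load_dataset("NbAiLab/distil_raw_ncc_speech_v7", split="train", streaming=True)

    # Most ASR models (e.g. Whisper, wav2vec 2.0) expect 16 kHz audio.
    ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

    sample = next(iter(ds))
    print(sample["audio"]["sampling_rate"])  # 16000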

License

The ncc_speech_v7 corpus has a private license.

Citation

The corpus was created and cleaned by Freddy Wetjen, Rolv-Arild Braaten, Angelina Zanardi and Per Egil Kummervold. No publication based on this corpus has been released so far.
