
Dataset Card for Skill Annotation Dataset

The Skill Annotation dataset is a German-language dataset developed by Tao Wu at RWTH University. The data is based on course descriptions from Udemy and the BA (the German Federal Employment Agency), and was created to support the skill extraction task for German course descriptions. The dataset was annotated with Prodigy.

Dataset Structure

Data Instances

{
  "text": "C#-Entwickler:in mit Ausbildereignung.",
  "tokens": [
    {"text": "C", "id": 0, "start": 0, "end": 1, "tokenizer_id": 157, "disabled": false, "ws": false},
    {"text": "#", "id": 1, "start": 1, "end": 2, "tokenizer_id": 26990, "disabled": false, "ws": false},
    {"text": "-", "id": 2, "start": 2, "end": 3, "tokenizer_id": 26935, "disabled": false, "ws": false},
    {"text": "Entwickler", "id": 3, "start": 3, "end": 13, "tokenizer_id": 23340, "disabled": false, "ws": false},
    {"text": ":", "id": 4, "start": 13, "end": 14, "tokenizer_id": 26964, "disabled": false, "ws": false},
    {"text": "in", "id": 5, "start": 14, "end": 16, "tokenizer_id": 50, "disabled": false, "ws": true},
    {"text": "mit", "id": 6, "start": 17, "end": 20, "tokenizer_id": 114, "disabled": false, "ws": true},
    {"text": "Ausbilder", "id": 7, "start": 21, "end": 30, "tokenizer_id": 27024, "disabled": false, "ws": false},
    {"text": "eignung", "id": 8, "start": 30, "end": 37, "tokenizer_id": 17425, "disabled": false, "ws": false},
    {"text": ".", "id": 9, "start": 37, "end": 38, "tokenizer_id": 26914, "disabled": false, "ws": true}
  ],
  "_input_hash": -1229748842,
  "_task_hash": 808267201,
  "_view_id": "ner_manual",
  "spans": [
    {"start": 0, "end": 2, "token_start": 0, "token_end": 1, "label": "SKILL"}
  ],
  "answer": "accept",
  "_timestamp": 1698691436,
  "_annotator_id": "2023-10-30_19-41-41",
  "_session_id": "2023-10-30_19-41-41"
}

Data Fields

  • text: the original text segment
  • tokens: the tokens the text is split into, each with character offsets (start, end), a tokenizer ID, and a trailing-whitespace flag (ws)
  • spans: the labeled annotation spans, given as character offsets (start, end) into the text plus inclusive token indices (token_start, token_end) and a label (e.g. SKILL)
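A record in this format can be parsed with the standard library alone. The sketch below (field names taken from the example instance above, abbreviated to the fields it uses) extracts the surface text of each labeled span via its character offsets:

```python
import json

# One record in the JSONL format shown above, abbreviated to the fields
# used here; the full record also carries tokens, hashes, answer, etc.
record = json.loads(
    '{"text":"C#-Entwickler:in mit Ausbildereignung.",'
    '"spans":[{"start":0,"end":2,"token_start":0,"token_end":1,"label":"SKILL"}]}'
)

# Each span stores character offsets into "text", so the annotated
# surface form is a simple slice of the text.
skills = [
    (record["text"][span["start"]:span["end"]], span["label"])
    for span in record.get("spans", [])
]
print(skills)  # [('C#', 'SKILL')]
```

The same slicing works for every record, since start and end are always character offsets into the record's own text field.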

Dataset Creation

Curation Rationale

This dataset was created to support skill extraction from German course descriptions.

Source Data

Data Collection and Processing

Collected from Udemy and from the BA (German Federal Employment Agency) API.

[More Information Needed]

Annotations

Annotation process

The data was annotated with Prodigy's manual NER interface (the records carry "_view_id": "ner_manual"). See the Prodigy documentation for details of the annotation workflow and output format.
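Prodigy stores each span by inclusive token indices (token_start, token_end), so the annotations can be converted to token-level BIO tags for training a sequence labeler. A minimal sketch, using the field names and the example record shown above:

```python
def spans_to_bio(tokens, spans):
    """Convert Prodigy-style spans (inclusive token_start/token_end)
    into one BIO tag per token."""
    tags = ["O"] * len(tokens)
    for span in spans:
        # First token of the span gets B-, the remaining tokens get I-.
        tags[span["token_start"]] = f"B-{span['label']}"
        for i in range(span["token_start"] + 1, span["token_end"] + 1):
            tags[i] = f"I-{span['label']}"
    return tags

# Token texts and span taken from the example record above.
tokens = ["C", "#", "-", "Entwickler", ":", "in", "mit", "Ausbilder", "eignung", "."]
spans = [{"token_start": 0, "token_end": 1, "label": "SKILL"}]
print(spans_to_bio(tokens, spans))
# ['B-SKILL', 'I-SKILL', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O']
```

Note that token_end is inclusive in Prodigy's output, unlike Python's usual half-open ranges, hence the `+ 1` in the loop bound.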

BibTeX:

[More Information Needed]

APA:

[More Information Needed]

Dataset Card Contact

[More Information Needed]
