---
dataset_info:
  features:
    - name: id
      dtype: int64
    - name: source
      dtype: string
    - name: source_date
      dtype: string
    - name: tokens
      sequence: string
    - name: ner_label
      sequence: int64
    - name: ner_tag
      sequence: string
    - name: nested_ner_label
      sequence: int64
    - name: nested_ner_tag
      sequence: string
  splits:
    - name: train
      num_bytes: 18729899
      num_examples: 24002
    - name: validation
      num_bytes: 1721290
      num_examples: 2200
    - name: test
      num_bytes: 3993690
      num_examples: 5100
  download_size: 4900445
  dataset_size: 24444879
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
license: mit
task_categories:
  - token-classification
language:
  - de
tags:
  - GermEval
pretty_name: GermEval 2014 NER challenge dataset
size_categories:
  - 1M<n<10M
---

# GermEval 14 NER dataset

This dataset includes the actual NER tags (`B-PER`, `B-LOC`, etc.) alongside the integer labels (0, 1, 2, ...) and requires no code execution when loading. It is structured as follows:

```
DatasetDict({
    train: Dataset({
        features: ['id', 'source', 'source_date', 'tokens', 'ner_label', 'ner_tag', 'nested_ner_label', 'nested_ner_tag'],
        num_rows: 24002
    })
    validation: Dataset({
        features: ['id', 'source', 'source_date', 'tokens', 'ner_label', 'ner_tag', 'nested_ner_label', 'nested_ner_tag'],
        num_rows: 2200
    })
    test: Dataset({
        features: ['id', 'source', 'source_date', 'tokens', 'ner_label', 'ner_tag', 'nested_ner_label', 'nested_ner_tag'],
        num_rows: 5100
    })
})
```
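A minimal loading sketch using the Hugging Face `datasets` library (the repository id below is an assumption; substitute this dataset's actual id on the Hub):

```python
from datasets import load_dataset

# Hypothetical repository id; replace with this dataset's actual id on the Hub.
ds = load_dataset("lschoen/germeval14_ner")

# Inspect one training example: tokens aligned with their outer NER tags and integer labels.
sample = ds["train"][0]
for token, tag, label in zip(sample["tokens"], sample["ner_tag"], sample["ner_label"]):
    print(f"{token}\t{tag}\t{label}")
```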

## Citation

This dataset is based on the data from the GermEval 2014 NER shared task. Please cite the original authors when using it:

```bibtex
@article{benikovaGermEval2014Named,
  title = {{{GermEval}} 2014 {{Named Entity Recognition Shared Task}}: {{Companion Paper}}},
  author = {Benikova, Darina and Biemann, Chris and Kisselew, Max and Pado, Sebastian},
  abstract = {This paper describes the GermEval 2014 Named Entity Recognition (NER) Shared Task workshop at KONVENS. It provides background information on the motivation of this task, the data-set, the evaluation method, and an overview of the participating systems, followed by a discussion of their results. In contrast to previous NER tasks, the GermEval 2014 edition uses an extended tagset to account for derivatives of names and tokens that contain name parts. Further, nested named entities had to be predicted, i.e. names that contain other names. The eleven participating teams employed a wide range of techniques in their systems. The most successful systems used state-of-the-art machine learning methods, combined with some knowledge-based features in hybrid systems.},
  langid = {english},
}
```