---
language:
  - tr
license:
  - cc-by-sa-4.0
multilinguality:
  - monolingual
size_categories:
  - 10K<n<100K
task_categories:
  - token-classification
task_ids:
  - named-entity-recognition
pretty_name: Turkish-WikiNER
---

# Dataset Card for "turkish-nlp-suite/turkish-wikiNER"

## Dataset Description

### Dataset Summary

A Turkish NER dataset of Wikipedia sentences. 20,000 sentences were sampled from the Kuzgunlar NER dataset and re-annotated. The annotations were done by Co-one; many thanks to them for their contributions. This dataset is also used in our brand new spaCy Turkish packages.

### Dataset Instances

An instance of this dataset looks as follows:

```json
{
  "tokens": ["Duygu", "eve", "gitti", "."],
  "tags": ["B-PERSON", "O", "O", "O"]
}
```
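
A minimal sketch of loading the dataset with the Hugging Face `datasets` library, assuming the Hub id from the card title and the split names listed under Data Split:

```python
from datasets import load_dataset

# Load the dataset from the Hugging Face Hub
# (Hub id taken from the card title; adjust if the repository is named differently).
dataset = load_dataset("turkish-nlp-suite/turkish-wikiNER")

# Inspect the first training example: parallel lists of tokens and NER tags.
# Depending on the feature schema, "tags" may be stored as label strings or class ids.
example = dataset["train"][0]
print(example["tokens"])
print(example["tags"])
```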

### Labels

- CARDINAL
- DATE
- EVENT
- FAC
- GPE
- LANGUAGE
- LAW
- LOC
- MONEY
- NORP
- ORDINAL
- ORG
- PERCENT
- PERSON
- PRODUCT
- QUANTITY
- TIME
- TITLE
- WORK_OF_ART
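
The instance example uses BIO-style prefixes (e.g. `B-PERSON`). A minimal sketch of expanding the entity types above into a full tag vocabulary, assuming the standard BIO scheme with an `O` tag (the exact tag order and ids in the released dataset may differ):

```python
# Entity types listed above; the example instance uses BIO-prefixed tags
# (e.g. "B-PERSON"), so the full tag set is assumed to be B-/I- pairs plus "O".
ENTITY_TYPES = [
    "CARDINAL", "DATE", "EVENT", "FAC", "GPE", "LANGUAGE", "LAW", "LOC",
    "MONEY", "NORP", "ORDINAL", "ORG", "PERCENT", "PERSON", "PRODUCT",
    "QUANTITY", "TIME", "TITLE", "WORK_OF_ART",
]

# Build the tag vocabulary and an id mapping for token classification.
TAGS = ["O"] + [f"{prefix}-{ent}" for ent in ENTITY_TYPES for prefix in ("B", "I")]
tag2id = {tag: i for i, tag in enumerate(TAGS)}
```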

### Data Split

| name            | train  | validation | test |
|-----------------|--------|------------|------|
| Turkish-WikiNER | 18,000 | 1,000      | 1,000 |

### Citation

Coming soon