---
language:
- tr
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: Turkish-WikiNER
---
# Dataset Card for "turkish-nlp-suite/turkish-wikiNER"
## Dataset Description
- Repository: Turkish-WikiNER
- Paper: ACL link
- Dataset: Turkish-WikiNER
- Domain: Wiki
- Number of Labels: 19
## Dataset Summary
A Turkish NER dataset of Wikipedia sentences. 20,000 sentences were sampled from the Kuzgunlar NER dataset and re-annotated by Co-one; many thanks to them for their contributions. This dataset is also used in our brand-new spaCy Turkish packages.
## Dataset Instances
An instance of this dataset looks as follows:

```json
{
  "tokens": ["Duygu", "eve", "gitti", "."],
  "tags": ["B-PERSON", "O", "O", "O"]
}
```
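The `tags` field follows the BIO scheme: `B-` opens an entity, `I-` continues it, and `O` marks tokens outside any entity. A minimal sketch of decoding such an instance into entity spans (the `bio_to_spans` helper below is hypothetical, not part of the dataset or its tooling):

```python
def bio_to_spans(tokens, tags):
    """Collect (label, start, end) spans from BIO tags (end is exclusive)."""
    spans, start, label = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            if label is not None:          # close a span still in progress
                spans.append((label, start, i))
            start, label = i, tag[2:]
        elif tag.startswith("I-") and label == tag[2:]:
            continue                       # same entity continues
        else:                              # "O" or a mismatched "I-" tag
            if label is not None:
                spans.append((label, start, i))
            start, label = None, None
    if label is not None:                  # span reaching the end of the sentence
        spans.append((label, start, len(tags)))
    return spans

instance = {
    "tokens": ["Duygu", "eve", "gitti", "."],
    "tags": ["B-PERSON", "O", "O", "O"],
}
print(bio_to_spans(instance["tokens"], instance["tags"]))
# → [('PERSON', 0, 1)], i.e. the token "Duygu" is a PERSON entity
```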
## Labels
- CARDINAL
- DATE
- EVENT
- FAC
- GPE
- LANGUAGE
- LAW
- LOC
- MONEY
- NORP
- ORDINAL
- ORG
- PERCENT
- PERSON
- PRODUCT
- QUANTITY
- TIME
- TITLE
- WORK_OF_ART
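For token-classification models, the entity labels above are typically expanded into a BIO tagset with an integer id per tag. The card does not specify the exact tag inventory or ordering used downstream, so the sketch below is only an assumed, conventional mapping:

```python
# Entity labels as listed on the card.
LABELS = [
    "CARDINAL", "DATE", "EVENT", "FAC", "GPE", "LANGUAGE", "LAW", "LOC",
    "MONEY", "NORP", "ORDINAL", "ORG", "PERCENT", "PERSON", "PRODUCT",
    "QUANTITY", "TIME", "TITLE", "WORK_OF_ART",
]

# Conventional BIO expansion (assumed ordering): "O" plus a B-/I- pair per label.
BIO_TAGS = ["O"] + [f"{prefix}-{label}" for label in LABELS for prefix in ("B", "I")]
tag2id = {tag: i for i, tag in enumerate(BIO_TAGS)}
id2tag = {i: tag for tag, i in tag2id.items()}

print(tag2id["B-PERSON"], id2tag[0])
```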
## Data Split

| name | train | validation | test |
|---|---|---|---|
| Turkish-WikiNER | 18000 | 1000 | 1000 |
## Citation
Coming soon