---
language:
- tr
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: Turkish-WikiNER
---
# Dataset Card for "turkish-nlp-suite/turkish-wikiNER"
## Dataset Description
- **Repository:** [Turkish-WikiNER](https://github.com/turkish-nlp-suite/Turkish-Wiki-NER-Dataset)
- **Paper:** [ACL link]()
- **Dataset:** Turkish-WikiNER
- **Domain:** Wiki
- **Number of Labels:** 19
### Dataset Summary
A Turkish NER dataset of Wikipedia sentences: 20,000 sentences were sampled from the [Kuzgunlar NER dataset](https://data.mendeley.com/datasets/cdcztymf4k/1) and re-annotated. Annotation was carried out by [Co-one](https://co-one.co/); many thanks to them for their contributions. This dataset is also used in our brand-new spaCy Turkish packages.
### Dataset Instances
An instance of this dataset looks as follows:
```json
{
  "tokens": ["Duygu", "eve", "gitti", "."],
  "tags": ["B-PERSON", "O", "O", "O"]
}
```
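Each record pairs a token list with a tag list of the same length. A minimal sketch of reading one such record (field names taken from the example above):

```python
import json

# One record as shown above; "tokens" and "tags" are parallel lists.
record = json.loads(
    '{"tokens": ["Duygu", "eve", "gitti", "."],'
    ' "tags": ["B-PERSON", "O", "O", "O"]}'
)

# Every token must carry exactly one tag.
assert len(record["tokens"]) == len(record["tags"])

# Pair each token with its tag, e.g. ("Duygu", "B-PERSON").
pairs = list(zip(record["tokens"], record["tags"]))
print(pairs)
```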
### Labels
- CARDINAL
- DATE
- EVENT
- FAC
- GPE
- LANGUAGE
- LAW
- LOC
- MONEY
- NORP
- ORDINAL
- ORG
- PERCENT
- PERSON
- PRODUCT
- QUANTITY
- TIME
- TITLE
- WORK_OF_ART
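For token-classification training, the 19 entity labels above are typically expanded into a full tag set. A sketch, assuming the standard BIO scheme (consistent with the `B-PERSON`/`O` tags in the example instance):

```python
# The 19 entity labels listed above.
labels = [
    "CARDINAL", "DATE", "EVENT", "FAC", "GPE", "LANGUAGE", "LAW", "LOC",
    "MONEY", "NORP", "ORDINAL", "ORG", "PERCENT", "PERSON", "PRODUCT",
    "QUANTITY", "TIME", "TITLE", "WORK_OF_ART",
]

# Expand into BIO tags: one "O" plus a B-/I- pair per label.
tags = ["O"] + [f"{prefix}-{label}" for label in labels for prefix in ("B", "I")]
tag2id = {tag: i for i, tag in enumerate(tags)}

print(len(tags))  # 39 tags: 1 + 2 * 19
```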
### Data Split
| name |train|validation|test|
|---------|----:|---------:|---:|
|Turkish-WikiNER|18000| 1000|1000|
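The split sizes in the table sum to the 20,000 sampled sentences quoted in the summary; a quick sanity check:

```python
# Split sizes from the table above.
splits = {"train": 18000, "validation": 1000, "test": 1000}

# They should total the 20,000 re-annotated sentences.
assert sum(splits.values()) == 20000
print(sum(splits.values()))  # 20000
```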
### Citation
Coming soon