---
language:
- fa
license: mit
size_categories:
- 10K<n<100K
task_categories:
- token-classification
pretty_name: PEYMA-ARMAN-Mixed
dataset_info:
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': B_LOC
'1': I_DAT
'2': B_PCT
'3': I_LOC
'4': I_PER
'5': I_MON
'6': B_ORG
'7': B_PRO
'8': B_PER
'9': O
'10': I_PCT
'11': I_ORG
'12': B_FAC
'13': B_DAT
'14': B_TIM
'15': I_TIM
'16': I_EVE
'17': B_MON
'18': I_PRO
'19': B_EVE
'20': I_FAC
- name: ner_tags_names
sequence: string
splits:
- name: train
num_bytes: 21618080
num_examples: 26384
- name: validation
num_bytes: 2782070
num_examples: 3296
- name: test
num_bytes: 2706143
num_examples: 3296
download_size: 4168673
dataset_size: 27106293
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
# Mixed Persian NER Dataset (PEYMA-ARMAN)
This dataset is a combination of [PEYMA](https://arxiv.org/abs/1801.09936) and [ARMAN](https://github.com/HaniehP/PersianNER) Persian NER datasets. It contains the following named entity tags:
- Product (PRO)
- Event (EVE)
- Facility (FAC)
- Location (LOC)
- Person (PER)
- Money (MON)
- Percent (PCT)
- Date (DAT)
- Organization (ORG)
- Time (TIM)
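Each entity type carries `B_` (begin) and `I_` (inside) prefixes, which together with the `O` tag gives 21 labels. A minimal sketch of the id-to-name mapping, copied from the `class_label` section of the YAML header above:

```python
# id -> label mapping, as declared in the dataset card's class_label section
ID2LABEL = {
    0: "B_LOC", 1: "I_DAT", 2: "B_PCT", 3: "I_LOC", 4: "I_PER",
    5: "I_MON", 6: "B_ORG", 7: "B_PRO", 8: "B_PER", 9: "O",
    10: "I_PCT", 11: "I_ORG", 12: "B_FAC", 13: "B_DAT", 14: "B_TIM",
    15: "I_TIM", 16: "I_EVE", 17: "B_MON", 18: "I_PRO", 19: "B_EVE",
    20: "I_FAC",
}
LABEL2ID = {name: i for i, name in ID2LABEL.items()}

def decode_tags(tag_ids):
    """Convert a sequence of integer ner_tags into their string names."""
    return [ID2LABEL[i] for i in tag_ids]
```

Note that the ids are not grouped by entity type, so always go through this mapping (or the `ner_tags_names` column) rather than assuming any ordering.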
## Dataset Information
The dataset is divided into three splits: train, test, and validation. Below is a summary of the dataset statistics:
| Split | B_DAT | B_EVE | B_FAC | B_LOC | B_MON | B_ORG | B_PCT | B_PER | B_PRO | B_TIM | I_DAT | I_EVE | I_FAC | I_LOC | I_MON | I_ORG | I_PCT | I_PER | I_PRO | I_TIM | O | num_rows |
|------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|----------|
| Train | 1512 | 1379 | 1334 | 13040 | 446 | 15762 | 266 | 11371 | 1719 | 224 | 1939 | 4600 | 2222 | 4254 | 1314 | 21347 | 308 | 7160 | 1736 | 375 | 747216 | 26417 |
| Test | 185 | 218 | 124 | 1868 | 53 | 2017 | 27 | 1566 | 281 | 27 | 245 | 697 | 237 | 511 | 142 | 2843 | 31 | 1075 | 345 | 37 | 92214 | 3303 |
| Validation | 161 | 143 | 192 | 1539 | 28 | 2180 | 33 | 1335 | 172 | 30 | 217 | 520 | 349 | 494 | 54 | 2923 | 34 | 813 | 136 | 39 | 96857 | 3302 |
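Counts like those above can be reproduced by tallying the `ner_tags_names` column of each split. A sketch over a toy batch (the field name follows the schema; the example rows are illustrative, not from the dataset):

```python
from collections import Counter

def count_tags(examples):
    """Tally label occurrences across examples that each carry
    a 'ner_tags_names' sequence, as in this dataset's schema."""
    counts = Counter()
    for ex in examples:
        counts.update(ex["ner_tags_names"])
    return counts

# toy batch using the card's field name
batch = [
    {"ner_tags_names": ["B_PER", "I_PER", "O", "O"]},
    {"ner_tags_names": ["B_LOC", "O"]},
]
```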
### Dataset schema
```python
DatasetDict({
train: Dataset({
features: ['tokens', 'ner_tags', 'ner_tags_names'],
num_rows: 26417
})
test: Dataset({
features: ['tokens', 'ner_tags', 'ner_tags_names'],
num_rows: 3303
})
validation: Dataset({
features: ['tokens', 'ner_tags', 'ner_tags_names'],
num_rows: 3302
})
})
```
### How to load the dataset
```python
from datasets import load_dataset
data = load_dataset("AliFartout/PEYMA-ARMAN-Mixed")
```
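The tag names here use underscores (`B_PER`) rather than the hyphenated IOB2 form (`B-PER`) that evaluation tools such as seqeval typically expect, so a small conversion helper may be needed (a sketch, assuming you want standard IOB2 output):

```python
def to_iob2(label: str) -> str:
    """Map this card's underscore-style tag (e.g. 'B_PER') to
    standard hyphenated IOB2 ('B-PER'); 'O' passes through unchanged."""
    if label == "O":
        return label
    prefix, _, entity = label.partition("_")
    return f"{prefix}-{entity}"
```

The string names can also be recovered at load time via `data["train"].features["ner_tags"].feature.names`, since `ner_tags` is a sequence of `ClassLabel` values.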