---
language:
- fa
license: mit
size_categories:
- 10K<n<100K
task_categories:
- token-classification
pretty_name: PEYMA-ARMAN-Mixed
dataset_info:
  features:
  - name: 'Unnamed: 0'
    dtype: int64
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence: int64
  - name: ner_tags_names
    sequence: string
  splits:
  - name: train
    num_bytes: 21842265.610780936
    num_examples: 26384
  - name: validation
    num_bytes: 2810957.964869776
    num_examples: 3296
  - name: test
    num_bytes: 2732470.8156221616
    num_examples: 3296
  download_size: 4352460
  dataset_size: 27385694.391272873
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
---
# Mixed Persian NER Dataset (PEYMA-ARMAN)
This dataset combines the PEYMA and ARMAN Persian NER datasets. It uses the following named-entity tags:
- Product (PRO)
- Event (EVE)
- Facility (FAC)
- Location (LOC)
- Person (PER)
- Money (MON)
- Percent (PCT)
- Date (DAT)
- Organization (ORG)
- Time (TIM)
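As the statistics table below shows, each tag appears with `B_` (beginning) and `I_` (inside) prefixes plus a plain `O` for non-entity tokens, i.e. an IOB-style scheme. A minimal sketch of grouping such tags back into entity spans (the helper name `extract_entities` and the toy sentence are illustrative, not part of the dataset):

```python
def extract_entities(tokens, tags):
    """Group IOB-style tags (B_XXX / I_XXX / O) into (entity_type, text) spans."""
    entities = []
    current_type, current_tokens = None, []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B_"):
            # A new entity starts; flush any span in progress.
            if current_type is not None:
                entities.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = tag[2:], [token]
        elif tag.startswith("I_") and current_type == tag[2:]:
            # Continuation of the current entity.
            current_tokens.append(token)
        else:
            # O tag (or a stray I_ tag); flush any span in progress.
            if current_type is not None:
                entities.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = None, []
    if current_type is not None:
        entities.append((current_type, " ".join(current_tokens)))
    return entities

tokens = ["Ali", "visited", "Tehran", "University", "yesterday"]
tags = ["B_PER", "O", "B_ORG", "I_ORG", "B_DAT"]
print(extract_entities(tokens, tags))
# [('PER', 'Ali'), ('ORG', 'Tehran University'), ('DAT', 'yesterday')]
```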
## Dataset Information
The dataset is divided into three splits: train, test, and validation. Below is a summary of the dataset statistics:
| Split | B_DAT | B_EVE | B_FAC | B_LOC | B_MON | B_ORG | B_PCT | B_PER | B_PRO | B_TIM | I_DAT | I_EVE | I_FAC | I_LOC | I_MON | I_ORG | I_PCT | I_PER | I_PRO | I_TIM | O | num_rows |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Train | 1512 | 1379 | 1334 | 13040 | 446 | 15762 | 266 | 11371 | 1719 | 224 | 1939 | 4600 | 2222 | 4254 | 1314 | 21347 | 308 | 7160 | 1736 | 375 | 747216 | 26417 |
| Test | 185 | 218 | 124 | 1868 | 53 | 2017 | 27 | 1566 | 281 | 27 | 245 | 697 | 237 | 511 | 142 | 2843 | 31 | 1075 | 345 | 37 | 92214 | 3303 |
| Validation | 161 | 143 | 192 | 1539 | 28 | 2180 | 33 | 1335 | 172 | 30 | 217 | 520 | 349 | 494 | 54 | 2923 | 34 | 813 | 136 | 39 | 96857 | 3302 |
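Per-split counts like those above can be reproduced by tallying the `ner_tags_names` sequences. A minimal sketch over mock rows (the helper name `tag_counts` is illustrative; in practice you would pass a loaded split such as `data["train"]`):

```python
from collections import Counter

def tag_counts(rows):
    """Tally tag frequencies across rows, as in the statistics table above."""
    counts = Counter()
    for row in rows:
        counts.update(row["ner_tags_names"])
    return counts

# Two mock rows standing in for a real split.
rows = [
    {"ner_tags_names": ["B_PER", "I_PER", "O"]},
    {"ner_tags_names": ["O", "B_LOC", "O"]},
]
counts = tag_counts(rows)
print(counts["O"], counts["B_PER"], counts["B_LOC"])
# 3 1 1
```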
## Dataset schema
```python
DatasetDict({
    train: Dataset({
        features: ['tokens', 'ner_tags', 'ner_tags_names'],
        num_rows: 26417
    })
    test: Dataset({
        features: ['tokens', 'ner_tags', 'ner_tags_names'],
        num_rows: 3303
    })
    validation: Dataset({
        features: ['tokens', 'ner_tags', 'ner_tags_names'],
        num_rows: 3302
    })
})
```
## How to load the dataset

```python
from datasets import load_dataset

data = load_dataset("AliFartout/PEYMA-ARMAN-Mixed")
```
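Once loaded, each row pairs `tokens` with integer `ner_tags` and their string `ner_tags_names`. A minimal sketch of inspecting one row, using a mock row in place of a download (the integer ids here are illustrative; check the real id-to-name pairing in the loaded dataset):

```python
# Mock row mirroring the schema above; a real one comes from data["train"][0].
row = {
    "tokens": ["او", "به", "تهران", "رفت"],   # "He went to Tehran"
    "ner_tags": [0, 0, 1, 0],                  # illustrative ids, not the real mapping
    "ner_tags_names": ["O", "O", "B_LOC", "O"],
}

# Align each token with its string tag.
pairs = list(zip(row["tokens"], row["ner_tags_names"]))
for token, tag in pairs:
    print(f"{token}\t{tag}")
```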