---
license: mit
task_categories:
  - token-classification
language:
  - fa
pretty_name: PEYMA-ARMAN-Mixed
size_categories:
  - 10K<n<100K
---

# Mixed Persian NER Dataset (PEYMA-ARMAN)

This dataset is a combination of the PEYMA and ARMAN Persian NER datasets. It contains the following named entity tags (a sketch of the corresponding IOB2 label set follows the list):

- Product (PRO)
- Event (EVE)
- Facility (FAC)
- Location (LOC)
- Person (PER)
- Money (MON)
- Percent (PCT)
- Date (DAT)
- Organization (ORG)
- Time (TIM)
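
Under the IOB2 scheme used in the statistics table below, each entity type gets a `B_` (beginning) and `I_` (inside) label, plus a single `O` label for non-entity tokens. The snippet below is a minimal sketch of that label set; the exact label strings are an assumption based on the table's column names (underscores rather than hyphens), so verify them against the `ner_tags_names` feature of the loaded dataset.

```python
# Entity types listed above; the label strings below are a hypothetical
# reconstruction and should be checked against 'ner_tags_names'.
ENTITY_TYPES = ["PRO", "EVE", "FAC", "LOC", "PER", "MON", "PCT", "DAT", "ORG", "TIM"]

labels = ["O"] + [f"{prefix}_{ent}" for ent in ENTITY_TYPES for prefix in ("B", "I")]
label2id = {label: i for i, label in enumerate(labels)}
id2label = {i: label for label, i in label2id.items()}
print(labels)
```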

## Dataset Information

The dataset is divided into three splits: train, test, and validation. Below is a summary of the dataset statistics (a sketch for recomputing these counts from the loaded splits follows the table):

| Split | B_DAT | B_EVE | B_FAC | B_LOC | B_MON | B_ORG | B_PCT | B_PER | B_PRO | B_TIM | I_DAT | I_EVE | I_FAC | I_LOC | I_MON | I_ORG | I_PCT | I_PER | I_PRO | I_TIM | O | num_rows |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Train | 1512 | 1379 | 1334 | 13040 | 446 | 15762 | 266 | 11371 | 1719 | 224 | 1939 | 4600 | 2222 | 4254 | 1314 | 21347 | 308 | 7160 | 1736 | 375 | 747216 | 26417 |
| Test | 185 | 218 | 124 | 1868 | 53 | 2017 | 27 | 1566 | 281 | 27 | 245 | 697 | 237 | 511 | 142 | 2843 | 31 | 1075 | 345 | 37 | 92214 | 3303 |
| Validation | 161 | 143 | 192 | 1539 | 28 | 2180 | 33 | 1335 | 172 | 30 | 217 | 520 | 349 | 494 | 54 | 2923 | 34 | 813 | 136 | 39 | 96857 | 3302 |
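
As a rough cross-check, the per-split counts above can be recomputed by counting the per-token label strings. This is only a sketch and assumes `ner_tags_names` stores one label string per token; adjust the field name if the schema differs.

```python
from collections import Counter
from datasets import load_dataset

data = load_dataset("AliFartout/PEYMA-ARMAN-Mixed")

for split, ds in data.items():
    # Flatten all per-token labels in the split and count them.
    counts = Counter(tag for example in ds["ner_tags_names"] for tag in example)
    print(split, ds.num_rows, counts.most_common(5))
```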

## Dataset Schema

```
DatasetDict({
    train: Dataset({
        features: ['tokens', 'ner_tags', 'ner_tags_names'],
        num_rows: 26417
    })
    test: Dataset({
        features: ['tokens', 'ner_tags', 'ner_tags_names'],
        num_rows: 3303
    })
    validation: Dataset({
        features: ['tokens', 'ner_tags', 'ner_tags_names'],
        num_rows: 3302
    })
})
```
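
A quick way to see how the three features relate is to print a single example. The sketch below assumes the features are parallel lists of word tokens, integer tag ids, and tag-name strings, which is how the schema above is typically laid out.

```python
from datasets import load_dataset

data = load_dataset("AliFartout/PEYMA-ARMAN-Mixed")
example = data["train"][0]

# 'tokens', 'ner_tags', and 'ner_tags_names' are expected to be parallel lists.
for token, tag_id, tag_name in zip(example["tokens"], example["ner_tags"], example["ner_tags_names"]):
    print(f"{token}\t{tag_id}\t{tag_name}")
```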

## How to Load the Dataset

```python
from datasets import load_dataset

data = load_dataset("AliFartout/PEYMA-ARMAN-Mixed")
```
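
For training a token-classification model with Hugging Face Transformers, the word-level labels need to be realigned to subword tokens. The sketch below is not part of this dataset card: it assumes the `bert-base-multilingual-cased` tokenizer (any Persian-capable checkpoint would work) and the standard `word_ids()` alignment pattern, masking special tokens and non-first subwords with `-100`.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

data = load_dataset("AliFartout/PEYMA-ARMAN-Mixed")
# mBERT is used here only as an example checkpoint.
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")

def tokenize_and_align(batch):
    encoded = tokenizer(batch["tokens"], is_split_into_words=True, truncation=True)
    all_labels = []
    for i, word_labels in enumerate(batch["ner_tags"]):
        word_ids = encoded.word_ids(batch_index=i)
        labels, previous = [], None
        for word_id in word_ids:
            if word_id is None:
                labels.append(-100)                   # special tokens are ignored by the loss
            elif word_id != previous:
                labels.append(word_labels[word_id])   # first subword keeps the word's label
            else:
                labels.append(-100)                   # remaining subwords are masked out
            previous = word_id
        all_labels.append(labels)
    encoded["labels"] = all_labels
    return encoded

tokenized = data.map(tokenize_and_align, batched=True, remove_columns=data["train"].column_names)
```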

Feel free to adjust the formatting according to your needs.