---
language:
  - fa
license: mit
size_categories:
  - 10K<n<100K
task_categories:
  - token-classification
pretty_name: PEYMA-ARMAN-Mixed
dataset_info:
  features:
    - name: tokens
      sequence: string
    - name: ner_tags
      sequence:
        class_label:
          names:
            '0': B_LOC
            '1': I_DAT
            '2': B_PCT
            '3': I_LOC
            '4': I_PER
            '5': I_MON
            '6': B_ORG
            '7': B_PRO
            '8': B_PER
            '9': O
            '10': I_PCT
            '11': I_ORG
            '12': B_FAC
            '13': B_DAT
            '14': B_TIM
            '15': I_TIM
            '16': I_EVE
            '17': B_MON
            '18': I_PRO
            '19': B_EVE
            '20': I_FAC
    - name: ner_tags_names
      sequence: string
  splits:
    - name: train
      num_bytes: 21618080
      num_examples: 26384
    - name: validation
      num_bytes: 2782070
      num_examples: 3296
    - name: test
      num_bytes: 2706143
      num_examples: 3296
  download_size: 4168673
  dataset_size: 27106293
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
---

# Mixed Persian NER Dataset (PEYMA-ARMAN)

This dataset combines the PEYMA and ARMAN Persian NER datasets. It covers the following named-entity tags:

- Product (PRO)
- Event (EVE)
- Facility (FAC)
- Location (LOC)
- Person (PER)
- Money (MON)
- Percent (PCT)
- Date (DAT)
- Organization (ORG)
- Time (TIM)
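The tags follow the BIO scheme: `B_` marks the first token of an entity, `I_` a continuation, and `O` a token outside any entity. For convenience, here is the id-to-label mapping copied from the `class_label` declaration in the metadata above (a small helper sketch, not part of the dataset itself):

```python
# Id-to-label mapping, copied from the dataset's class_label declaration.
# Note the ids are not sorted by tag name.
ID2LABEL = {
    0: "B_LOC", 1: "I_DAT", 2: "B_PCT", 3: "I_LOC", 4: "I_PER",
    5: "I_MON", 6: "B_ORG", 7: "B_PRO", 8: "B_PER", 9: "O",
    10: "I_PCT", 11: "I_ORG", 12: "B_FAC", 13: "B_DAT", 14: "B_TIM",
    15: "I_TIM", 16: "I_EVE", 17: "B_MON", 18: "I_PRO", 19: "B_EVE",
    20: "I_FAC",
}
# Reverse mapping, handy when configuring a token-classification model.
LABEL2ID = {label: i for i, label in ID2LABEL.items()}
```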

## Dataset Information

The dataset is divided into three splits: train, validation, and test. The table below gives the per-tag token counts and the number of rows in each split:

| Split | B_DAT | B_EVE | B_FAC | B_LOC | B_MON | B_ORG | B_PCT | B_PER | B_PRO | B_TIM | I_DAT | I_EVE | I_FAC | I_LOC | I_MON | I_ORG | I_PCT | I_PER | I_PRO | I_TIM | O | num_rows |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Train | 1512 | 1379 | 1334 | 13040 | 446 | 15762 | 266 | 11371 | 1719 | 224 | 1939 | 4600 | 2222 | 4254 | 1314 | 21347 | 308 | 7160 | 1736 | 375 | 747216 | 26417 |
| Test | 185 | 218 | 124 | 1868 | 53 | 2017 | 27 | 1566 | 281 | 27 | 245 | 697 | 237 | 511 | 142 | 2843 | 31 | 1075 | 345 | 37 | 92214 | 3303 |
| Validation | 161 | 143 | 192 | 1539 | 28 | 2180 | 33 | 1335 | 172 | 30 | 217 | 520 | 349 | 494 | 54 | 2923 | 34 | 813 | 136 | 39 | 96857 | 3302 |

## Schema

```
DatasetDict({
    train: Dataset({
        features: ['tokens', 'ner_tags', 'ner_tags_names'],
        num_rows: 26417
    })
    test: Dataset({
        features: ['tokens', 'ner_tags', 'ner_tags_names'],
        num_rows: 3303
    })
    validation: Dataset({
        features: ['tokens', 'ner_tags', 'ner_tags_names'],
        num_rows: 3302
    })
})
```

## How to load the dataset

```python
from datasets import load_dataset

data = load_dataset("AliFartout/PEYMA-ARMAN-Mixed")
```
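Each row pairs `tokens` with integer `ner_tags` and their string `ner_tags_names`. A common follow-up step is grouping the BIO tags into entity spans; here is a minimal pure-Python sketch (the sample row is hypothetical, with English stand-ins for Persian tokens):

```python
def extract_entities(tokens, tag_names):
    """Group BIO-tagged tokens into (entity_type, text) spans.

    A B_X tag starts a new span; I_X continues a span of the same type;
    O (or an inconsistent I_ tag) closes any open span.
    """
    entities = []
    current_type, current_tokens = None, []
    for token, tag in zip(tokens, tag_names):
        if tag.startswith("B_"):
            if current_tokens:
                entities.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = tag[2:], [token]
        elif tag.startswith("I_") and current_type == tag[2:]:
            current_tokens.append(token)
        else:
            if current_tokens:
                entities.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = None, []
    if current_tokens:
        entities.append((current_type, " ".join(current_tokens)))
    return entities

# Hypothetical example row, for illustration only:
tokens = ["Ali", "visited", "Tehran", "University", "yesterday"]
tags = ["B_PER", "O", "B_ORG", "I_ORG", "B_DAT"]
print(extract_entities(tokens, tags))
# → [('PER', 'Ali'), ('ORG', 'Tehran University'), ('DAT', 'yesterday')]
```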
