---
license: cc-by-nc-nd-4.0
dataset_info:
  features:
    - name: text
      dtype: string
  splits:
    - name: fa
      num_bytes: 3009706457
      num_examples: 20665964
    - name: normalized
      num_bytes: 2727445009
      num_examples: 20665964
    - name: en
      num_bytes: 1499109169
      num_examples: 13357835
  download_size: 3708178338
  dataset_size: 7236260635
configs:
  - config_name: default
    data_files:
      - split: fa
        path: data/fa-*
      - split: normalized
        path: data/normalized-*
      - split: en
        path: data/en-*
language:
  - fa
  - en
pretty_name: 'LSCP: Enhanced Large Scale Colloquial Persian Language Understanding'
---

# Enhanced Large Scale Colloquial Persian Language Understanding (LSCP)

Original Authors:

- Hadi Abdi Khojasteh, Institute for Advanced Studies in Basic Sciences (IASBS)
- Ebrahim Ansari, Institute of Formal and Applied Linguistics, Charles University
- Mahdi Bohlouli, Petanux GmbH

Supported by: Institute for Advanced Studies in Basic Sciences (IASBS), Charles University, and Petanux GmbH

Licensing: This dataset is made available under the CC BY-NC-ND 4.0 license.

## Note on Re-upload

This is a re-uploaded version of the LSCP dataset that includes only the Persian (raw and normalized) and English portions. The original dataset, with translations into all five languages (English, German, Czech, Italian, and Hindi), is available in the LINDAT/CLARIAH-CZ repository.


## Overview

The Enhanced Large Scale Colloquial Persian (LSCP) dataset offers an extensive corpus for colloquial Persian language processing, specifically designed to address the challenges of low-resource languages in NLP. This dataset comprises 120 million sentences derived from 27 million Persian tweets and includes annotations such as parsing trees, part-of-speech tags, sentiment polarity, and multilingual translations (English, German, Czech, Italian, and Hindi).

With LSCP, researchers can explore various NLP tasks within informal Persian language, filling a crucial gap in colloquial Persian language processing and advancing the potential applications of NLP for low-resource languages.

## Dataset Content

- Sentences: 120 million sentences derived from 27 million Persian tweets.
- Annotations: parsing trees, part-of-speech tags, sentiment polarity.
- Content available in this repository: the original Persian (Farsi) text, a normalized Persian version, and the English translations.
- License: CC BY-NC-ND 4.0

## BibTeX Citation

If you use this dataset, please cite the original work:

```bibtex
@InProceedings{abdikhojasteh:2020:LREC,
  author    = {Abdi Khojasteh, Hadi and Ansari, Ebrahim and Bohlouli, Mahdi},
  title     = {LSCP: Enhanced Large Scale Colloquial Persian Language Understanding},
  booktitle = {Proceedings of the Twelfth International Conference on Language Resources and Evaluation (LREC 2020)},
  year      = {2020},
  address   = {Marseille, France},
  publisher = {European Language Resources Association},
  pages     = {6323--6327},
  url       = {https://www.aclweb.org/anthology/2020.lrec-1.776}
}
```

## Getting Started

### Downloading the Dataset

The full multilingual dataset can be accessed from the LINDAT/CLARIAH-CZ repository; this Hugging Face repository provides only the Persian (fa), normalized Persian (normalized), and English (en) splits.

To load the dataset directly from Hugging Face using the datasets library:

```python
from datasets import load_dataset

ds = load_dataset("AlirezaF138/LSCP-Dataset")  # Available splits: 'fa', 'normalized', 'en'
```

This command will download and load the Persian, English, and normalized splits of the dataset, allowing you to start using it immediately in your NLP pipeline. For more extensive multilingual data, refer to the full original corpus available on LINDAT/CLARIAH-CZ.
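Because the full download is several gigabytes, it can be convenient to stream a single split and inspect a few records before committing to the full download. Below is a minimal sketch, assuming the Hub-hosted data files support streaming; the split names (fa, normalized, en) come from the configuration above, and each record exposes a single `text` field as declared in the dataset features.

```python
from itertools import islice

from datasets import load_dataset

# Stream only the Persian split instead of downloading all data files up front.
fa_stream = load_dataset("AlirezaF138/LSCP-Dataset", split="fa", streaming=True)

# Print the first few records; each record has a single "text" field.
for record in islice(fa_stream, 5):
    print(record["text"])
```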

### Loading the Data

After downloading and decompressing the original distribution, you can load the data for each source separately (e.g., monolingual Persian or Persian-English bilingual pairs) using standard file-reading functions, splitting on newline characters so that each line yields one sentence, as sketched below.
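A minimal sketch of that line-oriented loading follows. The file names `persian.txt` and `english.txt` are placeholders, not the actual names in the archive; substitute the real paths from the decompressed files.

```python
from pathlib import Path


def read_sentences(path):
    """Read one sentence per line from a UTF-8 text file."""
    return Path(path).read_text(encoding="utf-8").splitlines()


# Placeholder file names; replace with the actual files from the decompressed archive.
fa_sentences = read_sentences("persian.txt")
en_sentences = read_sentences("english.txt")

# For bilingual data, corresponding lines form sentence pairs.
pairs = list(zip(fa_sentences, en_sentences))
print(len(fa_sentences), len(en_sentences))
```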

An example pipeline is also available for use via Google Colab.


## Language Specificity

The LSCP dataset reflects the linguistic nuances of colloquial Persian, capturing characteristics of the spoken language (e.g., informal contractions, a more limited vocabulary than written forms, and phonetic shifts). This makes it particularly valuable for studying these language-specific phenomena within NLP tasks.
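Since this repository also ships a normalized Persian split alongside the raw fa split, one way to observe these colloquial phenomena is to compare the raw and normalized versions of the same sentences. The sketch below assumes the two splits are row-aligned (they report the same number of examples in the metadata above) and that streaming is available.

```python
from itertools import islice

from datasets import load_dataset

# Stream both Persian splits and compare raw vs. normalized text side by side.
raw = load_dataset("AlirezaF138/LSCP-Dataset", split="fa", streaming=True)
normalized = load_dataset("AlirezaF138/LSCP-Dataset", split="normalized", streaming=True)

for raw_row, norm_row in islice(zip(raw, normalized), 5):
    print("raw:       ", raw_row["text"])
    print("normalized:", norm_row["text"])
    print()
```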

## Data Collection and Annotation Process

Data collection for LSCP utilized Twitter's API, selecting a diverse range of tweets from a list of seed accounts and their followers. The dataset was curated by filtering out non-Persian and duplicate tweets and favoring tweets with longer sentence structures for richer content.

Annotation was performed using a two-stage process:

  1. Automatic Annotation: Initial tags, syntactic structures, and translations were generated using StanfordNLP and Google Cloud Translation.
  2. Human Verification: Human annotators verified tags, corrected mislabeled elements, and refined translations to ensure quality.

## Licensing and Attribution

This dataset is released under the CC BY-NC-ND 4.0 license, which permits non-commercial use and redistribution with attribution to the original creators, but does not permit distributing modified (derivative) versions.

Please ensure proper attribution by citing the BibTeX reference provided above when using this dataset.


For more information, please refer to the LINDAT/CLARIAH-CZ repository or the original publication in the LREC 2020 proceedings.