# AraRoBERTa-EGY

The AraRoBERTa models are mono-dialectal Arabic language models, each trained on a single country-level dialect. AraRoBERTa uses the RoBERTa-base configuration. More details are available in the [paper](https://aclanthology.org/2022.wanlp-1.24).

The following are the seven dialectal variants of AraRoBERTa:

- AraRoBERTa-SA: Saudi Arabia (SA)
- AraRoBERTa-EGY: Egypt (EGY)
- AraRoBERTa-KU: Kuwait (KU)
- AraRoBERTa-OM: Oman (OM)
- AraRoBERTa-LB: Lebanon (LB)
- AraRoBERTa-JO: Jordan (JO)
- AraRoBERTa-DZ: Algeria (DZ)
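For illustration, the pre-trained weights can be loaded with the Hugging Face `transformers` library. This is a minimal sketch, not an official usage guide: the Hub model ID `reemalyami/AraRoBERTa-EGY` is an assumption inferred from this repository's owner and model name, so adjust it to the actual path if it differs.

```python
# Minimal sketch of loading AraRoBERTa-EGY for masked-token prediction.
# The Hub model ID below is an assumption (repo owner + model name); adjust as needed.
from transformers import pipeline

MODEL_ID = "reemalyami/AraRoBERTa-EGY"  # assumed Hugging Face Hub ID


def predict_masked(text: str, model_id: str = MODEL_ID):
    """Return fill-mask predictions; RoBERTa-style models use "<mask>" as the mask token."""
    fill_mask = pipeline("fill-mask", model=model_id)
    return fill_mask(text)


# Example (downloads the model weights on first use):
# predict_masked("القهوة في مصر <mask>")
```

The same pattern applies to any of the seven dialectal variants by swapping the model ID.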

When using the model, please cite our paper:

```bibtex
@inproceedings{alyami-al-zaidy-2022-weakly,
    title = "Weakly and Semi-Supervised Learning for {A}rabic Text Classification using Monodialectal Language Models",
    author = "AlYami, Reem  and
      Al-Zaidy, Rabah",
    booktitle = "Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wanlp-1.24",
    pages = "260--272",
    abstract = "The lack of resources such as annotated datasets and tools for low-resource languages is a significant obstacle to the advancement of Natural Language Processing (NLP) applications targeting users who speak these languages. Although learning techniques such as semi-supervised and weakly supervised learning are effective in text classification cases where annotated data is limited, they are still not widely investigated in many languages due to the sparsity of data altogether, both labeled and unlabeled. In this study, we deploy both weakly, and semi-supervised learning approaches for text classification in low-resource languages and address the underlying limitations that can hinder the effectiveness of these techniques. To that end, we propose a suite of language-agnostic techniques for large-scale data collection, automatic data annotation, and language model training in scenarios where resources are scarce. Specifically, we propose a novel data collection pipeline for under-represented languages, or dialects, that is language and task agnostic and of sufficient size for training a language model capable of achieving competitive results on common NLP tasks, as our experiments show. The models will be shared with the research community.",
}
```

## Contact

Reem AlYami: LinkedIn | reem.yami@kfupm.edu.sa | yami.m.reem@gmail.com