The **AraRoBERTa** models are mono-dialectal Arabic language models, each trained on text from a single country-level dialect. All variants use the RoBERTa-base configuration. More details are available in the paper [AlYami and Al-Zaidy (2022)](https://aclanthology.org/2022.wanlp-1.24/).

AraRoBERTa comes in seven dialectal variants:

* AraRoBERTa-SA: Saudi Arabia (SA) dialect.
* AraRoBERTa-EGY: Egypt (EGY) dialect.
* AraRoBERTa-KU: Kuwait (KU) dialect.
* AraRoBERTa-OM: Oman (OM) dialect.
* AraRoBERTa-LB: Lebanon (LB) dialect.
* AraRoBERTa-JO: Jordan (JO) dialect.
* AraRoBERTa-DZ: Algeria (DZ) dialect.

# Citation

When using the model, please cite our paper:

```bibtex
@inproceedings{alyami-al-zaidy-2022-weakly,
    title = "Weakly and Semi-Supervised Learning for {A}rabic Text Classification using Monodialectal Language Models",
    author = "AlYami, Reem and Al-Zaidy, Rabah",
    booktitle = "Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.wanlp-1.24",
    pages = "260--272",
}
```

# Contact

**Reem AlYami**: [LinkedIn](https://www.linkedin.com/in/reem-alyami/)
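As a usage sketch, the seven variants listed above could be selected and loaded with the Hugging Face `transformers` library. The model IDs below are an assumption based on the naming in this card (the actual Hub repositories may be namespaced under the author's account), so check the Hub for the exact paths before use.

```python
# Hypothetical usage sketch: model IDs are assumed from the card's naming
# convention and may need a namespace prefix on the Hugging Face Hub.

DIALECTS = {
    "SA": "Saudi Arabia", "EGY": "Egypt", "KU": "Kuwait", "OM": "Oman",
    "LB": "Lebanon", "JO": "Jordan", "DZ": "Algeria",
}

def model_id(dialect: str) -> str:
    """Return the assumed model ID for a dialect code, e.g. 'SA'."""
    if dialect not in DIALECTS:
        raise ValueError(f"Unknown dialect code: {dialect!r}")
    return f"AraRoBERTa-{dialect}"

def load_variant(dialect: str):
    """Load the tokenizer and masked-LM head for one dialectal variant.

    Requires `transformers` and network access; shown here as a sketch,
    so the import is kept local to the function.
    """
    from transformers import AutoTokenizer, AutoModelForMaskedLM
    name = model_id(dialect)
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForMaskedLM.from_pretrained(name)
    return tokenizer, model

print(model_id("EGY"))  # AraRoBERTa-EGY
```

Since each variant is mono-dialectal, pick the one matching the dialect of your downstream text rather than mixing variants across a corpus.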