reemalyami committed 9e1848d (parent: 6bac473)
Update README.md
README.md CHANGED
@@ -1,4 +1,4 @@
-The AraRoBERTa models are mono-dialectal Arabic models trained on a country-level dialect. AraRoBERTa uses RoBERTa base config. More details are available in the paper (
+The AraRoBERTa models are mono-dialectal Arabic models, each trained on a single country-level dialect. AraRoBERTa uses the RoBERTa-base configuration. More details are available in the [paper](https://aclanthology.org/2022.wanlp-1.24/).
 
 The following are the seven AraRoBERTa dialectal variations:
 
@@ -8,4 +8,19 @@ AraRoBERTa-KU: Kuwait (KU).
 AraRoBERTa-OM: Oman (OM).
 AraRoBERTa-LB: Lebanon (LB).
 AraRoBERTa-JO: Jordan (JO).
-AraRoBERTa-DZ: Algeria (DZ).
+AraRoBERTa-DZ: Algeria (DZ).
+
+
+@inproceedings{alyami-al-zaidy-2022-weakly,
+    title = "Weakly and Semi-Supervised Learning for {A}rabic Text Classification using Monodialectal Language Models",
+    author = "AlYami, Reem  and
+      Al-Zaidy, Rabah",
+    booktitle = "Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)",
+    month = dec,
+    year = "2022",
+    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2022.wanlp-1.24",
+    pages = "260--272",
+    abstract = "The lack of resources such as annotated datasets and tools for low-resource languages is a significant obstacle to the advancement of Natural Language Processing (NLP) applications targeting users who speak these languages. Although learning techniques such as semi-supervised and weakly supervised learning are effective in text classification cases where annotated data is limited, they are still not widely investigated in many languages due to the sparsity of data altogether, both labeled and unlabeled. In this study, we deploy both weakly, and semi-supervised learning approaches for text classification in low-resource languages and address the underlying limitations that can hinder the effectiveness of these techniques. To that end, we propose a suite of language-agnostic techniques for large-scale data collection, automatic data annotation, and language model training in scenarios where resources are scarce. Specifically, we propose a novel data collection pipeline for under-represented languages, or dialects, that is language and task agnostic and of sufficient size for training a language model capable of achieving competitive results on common NLP tasks, as our experiments show. The models will be shared with the research community.",
+}
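The README names the variants but not how to load one. A minimal usage sketch, assuming the checkpoints are published on the Hugging Face Hub under IDs of the form `reemalyami/AraRoBERTa-<code>` — the namespace and ID pattern are assumptions, not stated in this commit:

```python
# Minimal loading sketch. The Hub repo IDs below follow an ASSUMED
# "reemalyami/AraRoBERTa-<country code>" pattern and are hypothetical.

def araroberta_repo_id(code: str) -> str:
    """Build the assumed Hub repo ID for a country code, e.g. 'DZ'."""
    return f"reemalyami/AraRoBERTa-{code}"

def load_araroberta(code: str):
    """Load the tokenizer and masked-LM weights for one dialect variant.

    Requires the `transformers` package and network access; RoBERTa-style
    checkpoints load with the standard Auto* classes.
    """
    from transformers import AutoModelForMaskedLM, AutoTokenizer

    repo = araroberta_repo_id(code)
    tokenizer = AutoTokenizer.from_pretrained(repo)
    model = AutoModelForMaskedLM.from_pretrained(repo)
    return tokenizer, model

# Example (downloads weights):
#   tokenizer, model = load_araroberta("DZ")  # Algerian-dialect variant
```

The `AutoModelForMaskedLM` class matches the RoBERTa-base pretraining objective; for downstream classification one would swap in `AutoModelForSequenceClassification` and fine-tune.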