reemalyami committed
Commit: c144fb1
Parent(s): a6c5b1e

Update README.md

Files changed (1):
  1. README.md +15 -13
README.md CHANGED
@@ -11,16 +11,18 @@ AraRoBERTa-JO: Jordan (JO).
  AraRoBERTa-DZ: Algeria (DZ).
 
 
- (Xie, Allaire, and Grolemund 2018)
-
- <div id="refs" class="references csl-bib-body hanging-indent">
-
- <div id="ref-xie2018" class="csl-entry">
-
- Xie, Yihui, J. J. Allaire, and Garrett Grolemund. 2018. *R Markdown: The
- Definitive Guide*. Boca Raton, Florida: Chapman; Hall/CRC.
- <https://bookdown.org/yihui/rmarkdown>.
-
- </div>
-
- </div>
+ ```bibtex
+ @inproceedings{alyami-al-zaidy-2022-weakly,
+     title = "Weakly and Semi-Supervised Learning for {A}rabic Text Classification using Monodialectal Language Models",
+     author = "AlYami, Reem and
+       Al-Zaidy, Rabah",
+     booktitle = "Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)",
+     month = dec,
+     year = "2022",
+     address = "Abu Dhabi, United Arab Emirates (Hybrid)",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2022.wanlp-1.24",
+     pages = "260--272",
+     abstract = "The lack of resources such as annotated datasets and tools for low-resource languages is a significant obstacle to the advancement of Natural Language Processing (NLP) applications targeting users who speak these languages. Although learning techniques such as semi-supervised and weakly supervised learning are effective in text classification cases where annotated data is limited, they are still not widely investigated in many languages due to the sparsity of data altogether, both labeled and unlabeled. In this study, we deploy both weakly, and semi-supervised learning approaches for text classification in low-resource languages and address the underlying limitations that can hinder the effectiveness of these techniques. To that end, we propose a suite of language-agnostic techniques for large-scale data collection, automatic data annotation, and language model training in scenarios where resources are scarce. Specifically, we propose a novel data collection pipeline for under-represented languages, or dialects, that is language and task agnostic and of sufficient size for training a language model capable of achieving competitive results on common NLP tasks, as our experiments show. The models will be shared with the research community.",
+ }
+ ```