# deberta-base-nepali
This model is pre-trained on the [nepalitext](https://huggingface.co/datasets/Sakonii/nepalitext-language-model-dataset) dataset, which consists of over 13 million Nepali text sequences, using a masked language modeling (MLM) objective. Our approach trains a SentencePiece Model (SPM) for text tokenization, similar to [XLM-RoBERTa](https://arxiv.org/abs/1911.02116), and trains [DeBERTa](https://arxiv.org/abs/2006.03654) for language modeling. Find more details in [this paper](https://aclanthology.org/2022.sigul-1.14/).
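Since the model is trained with an MLM objective, it can be queried directly for masked-token prediction. The sketch below is illustrative, not verified: the model id `Sakonii/deberta-base-nepali` is assumed from this card's title and the dataset author's namespace, and the Nepali sentence is an arbitrary example.

```python
from transformers import pipeline

# Assumed model id, inferred from the card title and dataset namespace.
fill_mask = pipeline("fill-mask", model="Sakonii/deberta-base-nepali")

# Use the tokenizer's own mask token rather than hard-coding [MASK] vs <mask>,
# since the SPM tokenizer's special tokens may differ from vanilla DeBERTa.
mask = fill_mask.tokenizer.mask_token
text = f"नेपाल एक {mask} देश हो।"  # "Nepal is a ___ country."

# Each prediction carries the filled token and its probability.
for prediction in fill_mask(text):
    print(prediction["token_str"], round(prediction["score"], 4))
```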
It achieves the following results on the evaluation set: