Update README.md
README.md
@@ -13,7 +13,7 @@ license: mit

Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.

-In DeBERTa V3, we replaced the MLM objective with the RTD (Replaced Token Detection) objective introduced by ELECTRA for pre-training, along with some further innovations to be described in an upcoming paper. Compared to DeBERTa V2, the V3 version significantly improves model performance on downstream tasks. You can find a brief introduction to the model in Appendix A11 of our original [paper](https://arxiv.org/abs/2006.03654), and we will provide more details in a separate write-up.
+In [DeBERTa V3](https://arxiv.org/abs/2111.09543), we replaced the MLM objective with the RTD (Replaced Token Detection) objective introduced by ELECTRA for pre-training, along with some further innovations to be described in an upcoming paper. Compared to DeBERTa V2, the V3 version significantly improves model performance on downstream tasks. You can find a brief introduction to the model in Appendix A11 of our original [paper](https://arxiv.org/abs/2006.03654), and we will provide more details in a separate write-up.

The DeBERTa V3 large model comes with 24 layers and a hidden size of 1024. Its total parameter count is 418M, since we use a vocabulary of 128K tokens, which introduces 131M parameters in the embedding layer. The model was trained on the same 160GB data as DeBERTa V2.
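For reference, the parameter arithmetic in the paragraph above is easy to sanity-check. A minimal sketch, assuming "128K" means 128,000 tokens; the optional cross-check assumes the Hugging Face `transformers` library and the `microsoft/deberta-v3-large` repo id, neither of which is named in this diff:

```python
# Sanity-check the embedding parameter count quoted in the README:
# vocabulary size x hidden size.
vocab_size = 128_000   # "128K tokens" (assumed to mean 128,000)
hidden_size = 1024     # hidden size of DeBERTa V3 large

embedding_params = vocab_size * hidden_size
print(f"embedding parameters: {embedding_params / 1e6:.0f}M")  # -> 131M, as stated

# Optional cross-check against the released checkpoint (assumes the Hugging Face
# `transformers` library; the repo id is an assumption, not stated in this diff):
# from transformers import AutoModel
# model = AutoModel.from_pretrained("microsoft/deberta-v3-large")
# print(sum(p.numel() for p in model.parameters()))  # total, incl. embeddings
```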