Update README.md
README.md CHANGED
@@ -1,7 +1,5 @@
 # ELECTRA
 ## Introduction
 **ELECTRA** is a method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset.
-
-Electra-base-vn is trained on 137GB of Vietnamese text from the CC-100 corpus, 30GB from a Vietnamese news dataset, and 2GB of Vietnamese Wikipedia, with a maximum sentence length of 512.
 ### Contact information
 For personal communication related to this project, please contact Nha Nguyen Van (nha282@gmail.com).
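
Since the README describes ELECTRA's objective (a discriminator that labels each input token as real or replaced), a minimal usage sketch with the Hugging Face `transformers` library may help illustrate it. This is not part of the commit: the checkpoint id `electra-base-vn` and the example sentence are assumptions; substitute the actual repo id of the released model.

```python
# Sketch only: querying an ELECTRA discriminator checkpoint with transformers.
# "electra-base-vn" is an assumed id/path, not confirmed by this commit.
import torch
from transformers import ElectraTokenizerFast, ElectraForPreTraining

model_id = "electra-base-vn"  # assumption: replace with the real checkpoint id
tokenizer = ElectraTokenizerFast.from_pretrained(model_id)
model = ElectraForPreTraining.from_pretrained(model_id)

# The discriminator produces one logit per token; a higher logit means the
# token is more likely to be "fake" (replaced by the generator in pre-training).
text = "Hà Nội là thủ đô của Việt Nam."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits            # shape: (1, sequence_length)
is_fake = torch.sigmoid(logits) > 0.5          # per-token real/fake decision

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for token, fake in zip(tokens, is_fake[0].tolist()):
    print(token, "fake" if fake else "real")
```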