Warn about config mismatch for pre-training
README.md
CHANGED
@@ -5,6 +5,8 @@ thumbnail: https://huggingface.co/front/thumbnails/google.png
 license: apache-2.0
 ---
 
+**WARNING**: This model is not scaled properly for pre-training with [google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator). The generator-to-discriminator parameter ratio is 1:1 instead of the intended 1:4. Pre-training with this config off the shelf will result in training instability and collapse of the discriminator loss.
+
 ## ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators
 
 **ELECTRA** is a new method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset.
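To see the mismatch the warning describes, here is a minimal sketch that counts parameters in both models. It assumes the Hugging Face `transformers` library, and it assumes this card is the matching generator checkpoint (`google/electra-small-generator`); adjust the name if the repo differs.

```python
# Sketch: compare generator vs. discriminator parameter counts.
# Assumption: this card is google/electra-small-generator.
from transformers import ElectraConfig, ElectraModel

# Randomly initialized weights are enough for counting parameters.
gen = ElectraModel(ElectraConfig.from_pretrained("google/electra-small-generator"))
disc = ElectraModel(ElectraConfig.from_pretrained("google/electra-small-discriminator"))

n_gen = sum(p.numel() for p in gen.parameters())
n_disc = sum(p.numel() for p in disc.parameters())
print(f"generator : discriminator = {n_gen / n_disc:.2f} : 1")
# The ELECTRA-small recipe expects roughly 0.25 : 1 (i.e. 1:4); a value
# near 1 : 1 is the mismatch flagged in the warning above.
```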
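And to illustrate the replaced-token-detection objective described above, a short sketch of standard `transformers` usage with the trained discriminator; the corrupted sentence is purely illustrative:

```python
# Sketch: score each token of a corrupted sentence as real or replaced.
from transformers import ElectraForPreTraining, ElectraTokenizerFast
import torch

name = "google/electra-small-discriminator"
discriminator = ElectraForPreTraining.from_pretrained(name)
tokenizer = ElectraTokenizerFast.from_pretrained(name)

# "jumps" has been replaced by a plausible fake, "flies".
fake_sentence = "The quick brown fox flies over the lazy dog"
inputs = tokenizer(fake_sentence, return_tensors="pt")

with torch.no_grad():
    logits = discriminator(**inputs).logits  # one logit per token

# Positive logits mean the discriminator thinks the token was replaced.
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, flagged in zip(tokens, logits[0] > 0):
    print(f"{token:>10}  {'REPLACED' if flagged else 'real'}")
```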