benjamin committed
Commit a7cf889 · Parent: 0d12f7e

add README.md

Files changed (1): README.md (+101, -0)
 
---
language: de

widget:
- text: "In einer schockierenden Entdeckung fanden Wissenschaftler eine Herde Einhörner, die in einem abgelegenen, zuvor unerforschten Tal in den Anden lebten."

license: mit
---

# GerPT2

A small German GPT2.

See the [GPT2 model card](https://huggingface.co/gpt2) for considerations on limitations and bias. See the [GPT2 documentation](https://huggingface.co/transformers/model_doc/gpt2.html) for details on GPT2.

## Comparison to [dbmdz/german-gpt2](https://huggingface.co/dbmdz/german-gpt2)

I evaluated both GerPT2 and the other German GPT2 model, [dbmdz/german-gpt2](https://huggingface.co/dbmdz/german-gpt2), on the [CC-100](http://data.statmt.org/cc-100/) dataset and on the German Wikipedia:

| Model             | CC-100 (PPL) | Wikipedia (PPL) |
|-------------------|--------------|-----------------|
| dbmdz/german-gpt2 | 49.47        | 62.92           |
| GerPT2            | __24.78__    | __35.33__       |

See the script `evaluate.py` in the [GerPT2 GitHub repository](https://github.com/bminixhofer/gerpt2) for the code.
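
The linked script is the reference; as a rough sketch of how such a perplexity evaluation can be set up with `transformers`, assuming this repository's model id is `benjamin/gerpt2` (the simple per-text averaging below is only an approximation of what `evaluate.py` does):

```python
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# "benjamin/gerpt2" is assumed to be the model id of this repository
tokenizer = AutoTokenizer.from_pretrained("benjamin/gerpt2")
model = AutoModelForCausalLM.from_pretrained("benjamin/gerpt2")
model.eval()

texts = [
    "Berlin ist die Hauptstadt von Deutschland.",
    "Der Rhein ist ein Fluss in Mitteleuropa.",
]

losses = []
with torch.no_grad():
    for text in texts:
        input_ids = tokenizer(text, return_tensors="pt").input_ids
        # passing labels=input_ids makes the model return the mean cross-entropy loss
        losses.append(model(input_ids, labels=input_ids).loss.item())

# perplexity is the exponential of the average negative log-likelihood
print(math.exp(sum(losses) / len(losses)))
```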

## Usage

![GerPT2 usage](https://user-images.githubusercontent.com/13353204/100330362-288d0b80-2fcf-11eb-82e7-ed71c7140a88.png)

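The image above shows basic usage; as a minimal sketch, assuming this repository's model id is `benjamin/gerpt2`, the model and tokenizer can be loaded with `transformers` like this. It also defines the `model`, `tokenizer`, `prompt` and `max_length` used in the snippet below:

```python
import torch  # used in the generation snippet below
from transformers import AutoModelForCausalLM, AutoTokenizer

# "benjamin/gerpt2" is assumed to be the model id of this repository
tokenizer = AutoTokenizer.from_pretrained("benjamin/gerpt2")
model = AutoModelForCausalLM.from_pretrained("benjamin/gerpt2")

prompt = "Wikipedia ist ein Projekt zum Aufbau einer Enzyklopädie"
max_length = 100

input_ids = tokenizer.encode(prompt, return_tensors="pt")
print(tokenizer.decode(model.generate(input_ids, do_sample=True, max_length=max_length)[0]))
```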

Also, two tricks might improve the generated text:

```python
output = model.generate(
    # during training, an EOS token was used to mark the beginning of each text,
    # so it can help to insert it at the start
    torch.tensor(
        [tokenizer.eos_token_id] + tokenizer.encode(prompt)
    ).unsqueeze(0),
    do_sample=True,
    # setting bad_words_ids=[[0]] disallows generating an EOS token; without this,
    # the model is prone to ending generation early because a significant number
    # of texts in the training corpus are quite short
    bad_words_ids=[[0]],
    max_length=max_length,
)[0]
print(tokenizer.decode(output))
```

## Training details

GerPT2 was trained on the entire German data (67GB) of the [CC-100 Corpus](http://data.statmt.org/cc-100/), with weights initialized from the [English GPT2 model](https://huggingface.co/gpt2).
GerPT2 was trained with the following setup (a schematic PyTorch sketch follows the list):

- a batch size of 256
- a OneCycle learning rate schedule with a maximum learning rate of 5e-3
- the AdamW optimizer with a weight decay of 0.01
- 7 epochs

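A minimal sketch of how these hyperparameters map onto PyTorch's `AdamW` and `OneCycleLR`; this is not the actual training loop (see `train.py` in the repository), and `total_steps` below is a placeholder:

```python
import torch
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")  # English weights as the starting point

optimizer = torch.optim.AdamW(model.parameters(), weight_decay=0.01)
# OneCycle schedule with a maximum learning rate of 5e-3; in the real run,
# total_steps would be (batches per epoch at batch size 256) * 7 epochs
scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr=5e-3, total_steps=100_000)

# per training step:
#   loss = model(input_ids, labels=input_ids).loss
#   loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
```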

Training took roughly 6 days on 8 TPUv3 cores.

To train GerPT2, follow these steps. Scripts are located in the [GitHub repository](https://github.com/bminixhofer/gerpt2):

0. Download and unzip training data from http://data.statmt.org/cc-100/.
1. Train a tokenizer using `prepare/train_tokenizer.py`. As training data for the tokenizer I used a random subset of 5% of the CC-100 data.
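
Schematically, this step amounts to something like the following (a sketch, not the actual `prepare/train_tokenizer.py`; the file name and vocabulary size are assumptions):

```python
from tokenizers import ByteLevelBPETokenizer

# "cc100_de_subset.txt" stands for a random ~5% sample of the German CC-100 data
tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=["cc100_de_subset.txt"],
    vocab_size=50257,                  # GPT2-sized vocabulary (assumption)
    special_tokens=["<|endoftext|>"],
)
tokenizer.save_model("tokenizer")
```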

2. (Optionally) generate a German input embedding matrix with `prepare/generate_aligned_wte.py`. This uses a neat trick to semantically map tokens from the English tokenizer to tokens from the German tokenizer using aligned word embeddings, e.g.:

```
ĠMinde -> Ġleast
Ġjed -> Ġwhatsoever
flughafen -> Air
vermittlung -> employment
teilung -> ignment
ĠInterpretation -> Ġinterpretation
Ġimport -> Ġimported
hansa -> irl
genehmigungen -> exempt
ĠAuflist -> Ġlists
Ġverschwunden -> Ġdisappeared
ĠFlyers -> ĠFlyers
Kanal -> Channel
Ġlehr -> Ġteachers
Ġnahelie -> Ġconvenient
gener -> Generally
mitarbeiter -> staff
```

This helped a lot on a trial run I did, although I wasn't able to do a full comparison due to budget and time constraints. To use this WTE matrix, pass it to the training script via `wte_path`. Credit to [this blogpost](https://medium.com/@pierre_guillou/faster-than-training-from-scratch-fine-tuning-the-english-gpt-2-in-any-language-with-hugging-f2ec05c98787) for the idea of initializing GPT2 from English weights.
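
Conceptually, the training script then uses this matrix roughly as follows (a simplified sketch; the file name and the exact handling in `train.py` are assumptions):

```python
import torch
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")  # English weights

# "aligned_wte.pt" stands for the matrix written by prepare/generate_aligned_wte.py,
# with shape (german_vocab_size, embedding_dim)
wte = torch.load("aligned_wte.pt")

model.resize_token_embeddings(wte.shape[0])
model.get_input_embeddings().weight.data.copy_(wte)
# GPT2 ties input and output embeddings, so the LM head picks up the new matrix too
model.tie_weights()
```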

3. Tokenize the corpus using `prepare/tokenize_text.py`. This generates train and validation token files in JSON Lines format.
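
A rough sketch of what this step does (the actual schema and any chunking logic are defined by `prepare/tokenize_text.py`; the `"input_ids"` key and file names here are assumptions):

```python
import json

from transformers import GPT2TokenizerFast

# the tokenizer directory written in step 1 (assumption: it can be loaded like this)
tokenizer = GPT2TokenizerFast.from_pretrained("tokenizer")

with open("train.txt", encoding="utf-8") as src, open("train.jsonl", "w") as dst:
    for line in src:
        # one JSON object with the token ids per line (JSON Lines)
        dst.write(json.dumps({"input_ids": tokenizer.encode(line.strip())}) + "\n")
```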

4. Run the training script `train.py`! `run.sh` shows how this was executed for the full run with config `configs/tpu.json`.

## License

GerPT2 is licensed under the MIT License.

## Acknowledgements

Thanks to [Hugging Face](https://huggingface.co) for awesome tools and infrastructure.
Special thanks to [PetFinder.my](https://www.petfinder.my/) for generously sponsoring the resources used for training.