julien-c (HF staff) committed on
Commit
900e1c9
1 Parent(s): bc60bd9

Migrate model card from transformers-repo


Read announcement at https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755
Original file history: https://github.com/huggingface/transformers/commits/master/model_cards/dbmdz/electra-base-italian-xxl-cased-generator/README.md

Files changed (1)
  1. README.md +110 -0
README.md ADDED

---
language: it
license: mit
datasets:
- wikipedia
---

# 🤗 + 📚 dbmdz BERT and ELECTRA models

In this repository, the MDZ Digital Library team (dbmdz) at the Bavarian State
Library open-sources Italian BERT and ELECTRA models 🎉

# Italian BERT

The source data for the Italian BERT model consists of a recent Wikipedia dump and
various texts from the [OPUS corpora](http://opus.nlpl.eu/) collection. The final
training corpus has a size of 13GB and 2,050,057,573 tokens.

For sentence splitting, we use NLTK (it is faster than spaCy).
Our cased and uncased models were trained with an initial sequence length of 512
subwords for ~2-3M steps.
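
For illustration, NLTK-based sentence splitting of Italian text looks roughly like the sketch below. This is not the actual preprocessing script; it assumes NLTK's standard pre-trained Punkt tokenizer, which ships with an Italian model.

```python
# Minimal sketch of NLTK-based sentence splitting for Italian text
# (assumption: NLTK's pre-trained Punkt models are available for download).
import nltk

nltk.download("punkt")  # one-time download of the Punkt sentence tokenizer

text = "L'Italia è una repubblica. Roma è la sua capitale."
sentences = nltk.sent_tokenize(text, language="italian")
print(sentences)  # two sentences, split at the full stops
```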

For the XXL Italian models, we use the same training data from OPUS and extend
it with data from the Italian part of the [OSCAR corpus](https://traces1.inria.fr/oscar/).
Thus, the final training corpus has a size of 81GB and 13,138,379,147 tokens.

Note: Unfortunately, a wrong vocab size was used when training the XXL models.
This explains the mismatch between the "real" vocab size of 31102 and the
vocab size specified in `config.json`. However, the model works, and all
evaluations were done under those circumstances.
See [this issue](https://github.com/dbmdz/berts/issues/7) for more information.
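
The mismatch can be verified directly. The snippet below is a small check, assuming a recent Transformers version; it compares the tokenizer's actual vocab size with the value stored in `config.json`.

```python
# Compare the tokenizer's actual vocab size with the one in config.json.
from transformers import AutoConfig, AutoTokenizer

model_name = "dbmdz/bert-base-italian-xxl-cased"

tokenizer = AutoTokenizer.from_pretrained(model_name)
config = AutoConfig.from_pretrained(model_name)

print(len(tokenizer))     # "real" vocab size: 31102
print(config.vocab_size)  # value declared in config.json
```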

The Italian ELECTRA model was trained on the "XXL" corpus for 1M steps in total, using a batch
size of 128. We largely follow the ELECTRA training procedure used for
[BERTurk](https://github.com/stefan-it/turkish-bert/tree/master/electra).

## Model weights

Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers)
compatible weights are available. If you need access to TensorFlow checkpoints,
please raise an issue!

| Model | Downloads
| ---------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------
| `dbmdz/bert-base-italian-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/vocab.txt)
| `dbmdz/bert-base-italian-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/vocab.txt)
| `dbmdz/bert-base-italian-xxl-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/vocab.txt)
| `dbmdz/bert-base-italian-xxl-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/vocab.txt)
| `dbmdz/electra-base-italian-xxl-cased-discriminator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-discriminator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/vocab.txt)
| `dbmdz/electra-base-italian-xxl-cased-generator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-generator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/vocab.txt)
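
The table above lists direct download links; alternatively, a single file can be fetched programmatically. The sketch below assumes the `huggingface_hub` package is installed, which is not mentioned in the original card.

```python
# Sketch: download one file from a repository listed above
# (assumption: the huggingface_hub package is installed).
from huggingface_hub import hf_hub_download

config_path = hf_hub_download(
    repo_id="dbmdz/bert-base-italian-cased",
    filename="config.json",
)
print(config_path)  # local cache path of the downloaded file
```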

## Results

For results on downstream tasks like NER or PoS tagging, please refer to
[this repository](https://github.com/stefan-it/italian-bertelectra).

## Usage

With Transformers >= 2.3, our Italian BERT models can be loaded like this:

```python
from transformers import AutoModel, AutoTokenizer

model_name = "dbmdz/bert-base-italian-cased"

tokenizer = AutoTokenizer.from_pretrained(model_name)

model = AutoModel.from_pretrained(model_name)
```
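
Once loaded, the model can be applied to text right away. The continuation below is a sketch assuming a recent Transformers version (where the tokenizer object is callable); it encodes a sentence and inspects the contextual embeddings.

```python
# Sketch: run the loaded model on a sentence and look at the output shape
# (assumption: a recent Transformers version with a callable tokenizer).
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "dbmdz/bert-base-italian-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

inputs = tokenizer("Buongiorno, come stai?", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # (batch, sequence_length, hidden_size)
```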

To load the (recommended) Italian XXL BERT models, just use:

```python
from transformers import AutoModel, AutoTokenizer

model_name = "dbmdz/bert-base-italian-xxl-cased"

tokenizer = AutoTokenizer.from_pretrained(model_name)

model = AutoModel.from_pretrained(model_name)
```

To load the Italian XXL ELECTRA model (discriminator), just use:

```python
from transformers import AutoModel, AutoTokenizer

model_name = "dbmdz/electra-base-italian-xxl-cased-discriminator"

tokenizer = AutoTokenizer.from_pretrained(model_name)

model = AutoModel.from_pretrained(model_name)
```
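
This repository hosts the generator checkpoint, which is a masked language model. As a sketch (assuming a Transformers version with the `pipeline` API), it can be tried out for masked-token prediction like this:

```python
# Sketch: use the ELECTRA generator as a masked LM via the fill-mask pipeline
# (assumption: a Transformers version that provides the pipeline API).
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="dbmdz/electra-base-italian-xxl-cased-generator",
    tokenizer="dbmdz/electra-base-italian-xxl-cased-generator",
)

print(fill_mask("Roma è la [MASK] d'Italia."))
```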

# Hugging Face model hub

All models are available on the [Hugging Face model hub](https://huggingface.co/dbmdz).

# Contact (Bugs, Feedback, Contribution and more)

For questions about our BERT/ELECTRA models, just open an issue
[here](https://github.com/dbmdz/berts/issues/new) 🤗

# Acknowledgments

Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️

Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team,
it is possible to download both cased and uncased models from their S3 storage 🤗