---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---

# BERT base model (uncased)

Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
between english and English.

Disclaimer: The team releasing BERT did not write a model card for this model, so this model card has been written by
the Hugging Face team.

## Model description

BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of
publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:

- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs), which usually see the words one after the other, or from autoregressive models like
GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict whether the two sentences followed each other or not (see the sketch after this list).
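
Both objectives are exposed by `BertForPreTraining` in transformers; here is a minimal sketch (not part of the original card) showing the MLM head and the NSP head side by side:

```python
import torch
from transformers import BertTokenizer, BertForPreTraining

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForPreTraining.from_pretrained("bert-base-uncased")

# Encode a sentence pair; the tokenizer adds [CLS] and [SEP] automatically.
inputs = tokenizer("The man went to the store.", "He bought a gallon of milk.",
                   return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# MLM head: one score per vocabulary entry at every input position.
print(outputs.prediction_logits.shape)        # torch.Size([1, seq_len, 30522])
# NSP head: two scores, ordered (is next sentence, is random sentence).
print(outputs.seq_relationship_logits.shape)  # torch.Size([1, 2])
```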

This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
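
As a hedged sketch of that feature-extraction workflow (the sentences and the downstream classifier are illustrative assumptions), the `[CLS]` hidden state can serve as a fixed-size sentence vector:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

sentences = ["I loved this movie.", "The plot made no sense."]
inputs = tokenizer(sentences, padding=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Final hidden state of the [CLS] token: one 768-dim vector per sentence.
features = outputs.last_hidden_state[:, 0, :]  # shape: (2, 768)

# These vectors can feed any standard classifier (logistic regression, MLP, ...).
```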

## Model variations

BERT was originally released in base and large variations, for cased and uncased input text. The uncased models also strip accent markers.
Chinese and multilingual uncased and cased versions followed shortly after.
A later work replaced the subpiece masking in preprocessing with whole word masking, and two models were released with it.
Twenty-four smaller models were released afterward.

The detailed release history can be found on the [google-research/bert readme](https://github.com/google-research/bert/blob/master/README.md) on GitHub.

| Model | #params | Language |
|-------|---------|----------|
| [`bert-base-uncased`](https://huggingface.co/bert-base-uncased) | 110M | English |
| [`bert-large-uncased`](https://huggingface.co/bert-large-uncased) | 340M | English |
| [`bert-base-cased`](https://huggingface.co/bert-base-cased) | 110M | English |
| [`bert-large-cased`](https://huggingface.co/bert-large-cased) | 340M | English |
| [`bert-base-chinese`](https://huggingface.co/bert-base-chinese) | 110M | Chinese |
| [`bert-base-multilingual-cased`](https://huggingface.co/bert-base-multilingual-cased) | 110M | Multiple |
| [`bert-large-uncased-whole-word-masking`](https://huggingface.co/bert-large-uncased-whole-word-masking) | 340M | English |
| [`bert-large-cased-whole-word-masking`](https://huggingface.co/bert-large-cased-whole-word-masking) | 340M | English |

## Intended uses & limitations

You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
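
As a rough sketch of that fine-tuning path (the example text, label, and `num_labels=2` are illustrative assumptions), sequence classification can start from `BertForSequenceClassification`:

```python
import torch
from transformers import AutoTokenizer, BertForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# num_labels is task-specific; 2 here assumes binary classification.
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# A single illustrative example; a real setup would batch over a labeled dataset.
inputs = tokenizer("A delightful film.", return_tensors="pt")
labels = torch.tensor([1])

outputs = model(**inputs, labels=labels)
outputs.loss.backward()  # cross-entropy on the classification head; plug into any training loop
```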

### How to use

You can use this model directly with a pipeline for masked language modeling from the [Optimum library](https://huggingface.co/docs/optimum/main/en/index):

```python
>>> from optimum.pipelines import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased', accelerator="ort")
>>> unmasker("The capital of France is [MASK].")

[
 {'score': 0.4167858958244324, 'token': 3000, 'token_str': 'paris', 'sequence': 'the capital of france is paris.'},
 {'score': 0.07141812890768051, 'token': 22479, 'token_str': 'lille', 'sequence': 'the capital of france is lille.'},
 {'score': 0.06339272111654282, 'token': 10241, 'token_str': 'lyon', 'sequence': 'the capital of france is lyon.'},
 {'score': 0.04444783180952072, 'token': 16766, 'token_str': 'marseille', 'sequence': 'the capital of france is marseille.'},
 {'score': 0.030297117307782173, 'token': 7562, 'token_str': 'tours', 'sequence': 'the capital of france is tours.'}
]
```

Here is how to use this model to fill in a masked token with the ONNX Runtime backend:

```python
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Export the PyTorch checkpoint to ONNX on the fly.
model = ORTModelForMaskedLM.from_pretrained("bert-base-uncased", from_transformers=True)
text = "The capital of France is [MASK]."
inputs = tokenizer(text, return_tensors="pt")
logits = model(**inputs).logits

# Locate the [MASK] position and take the highest-scoring token there.
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)

tokenizer.decode(predicted_token_id)
```

### Limitations and bias

Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:

```python
>>> from optimum.pipelines import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased', accelerator="ort")
>>> unmasker("The man worked as a [MASK].")

[{'sequence': '[CLS] the man worked as a carpenter. [SEP]',
  'score': 0.09747550636529922,
  'token': 10533,
  'token_str': 'carpenter'},
 {'sequence': '[CLS] the man worked as a waiter. [SEP]',
  'score': 0.0523831807076931,
  'token': 15610,
  'token_str': 'waiter'},
 {'sequence': '[CLS] the man worked as a barber. [SEP]',
  'score': 0.04962705448269844,
  'token': 13362,
  'token_str': 'barber'},
 {'sequence': '[CLS] the man worked as a mechanic. [SEP]',
  'score': 0.03788609802722931,
  'token': 15893,
  'token_str': 'mechanic'},
 {'sequence': '[CLS] the man worked as a salesman. [SEP]',
  'score': 0.037680890411138535,
  'token': 18968,
  'token_str': 'salesman'}]

>>> unmasker("The woman worked as a [MASK].")

[{'sequence': '[CLS] the woman worked as a nurse. [SEP]',
  'score': 0.21981462836265564,
  'token': 6821,
  'token_str': 'nurse'},
 {'sequence': '[CLS] the woman worked as a waitress. [SEP]',
  'score': 0.1597415804862976,
  'token': 13877,
  'token_str': 'waitress'},
 {'sequence': '[CLS] the woman worked as a maid. [SEP]',
  'score': 0.1154729500412941,
  'token': 10850,
  'token_str': 'maid'},
 {'sequence': '[CLS] the woman worked as a prostitute. [SEP]',
  'score': 0.037968918681144714,
  'token': 19215,
  'token_str': 'prostitute'},
 {'sequence': '[CLS] the woman worked as a cook. [SEP]',
  'score': 0.03042375110089779,
  'token': 5660,
  'token_str': 'cook'}]
```

This bias will also affect all fine-tuned versions of this model.

## Training data

The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books, and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).

## Training procedure

### Preprocessing

The texts are lowercased and tokenized using WordPiece with a vocabulary size of 30,000. The inputs of the model are
then of the form:

```
[CLS] Sentence A [SEP] Sentence B [SEP]
```

With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus, and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text, usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
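
A quick way to see that input format in practice (a sketch, not part of the original card) is to tokenize a sentence pair and decode it back:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Passing two texts produces the [CLS] A [SEP] B [SEP] layout used in pretraining.
encoding = tokenizer("My dog is cute", "He likes playing")
print(tokenizer.decode(encoding["input_ids"]))
# [CLS] my dog is cute [SEP] he likes playing [SEP]
```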

The details of the masking procedure for each sentence are the following (a sketch of the rule follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
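
Here is a minimal sketch of that 80/10/10 rule over plain token ids (a simplified illustration, not the original pretraining code):

```python
import random

def mask_tokens(token_ids, mask_token_id, vocab_size, mlm_probability=0.15):
    """Apply BERT-style masking; returns (corrupted inputs, MLM labels)."""
    inputs, labels = list(token_ids), []
    for i, token in enumerate(token_ids):
        if random.random() >= mlm_probability:
            labels.append(-100)  # not selected: ignored by the MLM loss
            continue
        labels.append(token)     # selected: the model must predict this id
        roll = random.random()
        if roll < 0.8:
            inputs[i] = mask_token_id  # 80%: replace with [MASK]
        elif roll < 0.9:
            # 10%: a random token (the real procedure draws one different from the original).
            inputs[i] = random.randrange(vocab_size)
        # remaining 10%: keep the original token unchanged
    return inputs, labels
```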

### Pretraining

The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
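
For reference, a hedged sketch of an equivalent optimizer and schedule in PyTorch/transformers (the original training used the TensorFlow BERT code, so this is only an approximation):

```python
import torch
from transformers import BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining.from_pretrained("bert-base-uncased")

# Adam with the hyperparameters above; AdamW applies the weight decay in decoupled form.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4,
                              betas=(0.9, 0.999), weight_decay=0.01)

# 10,000 warmup steps, then linear decay over the one million total steps.
scheduler = get_linear_schedule_with_warmup(optimizer,
                                            num_warmup_steps=10_000,
                                            num_training_steps=1_000_000)
```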

## Evaluation results

When fine-tuned on downstream tasks, this model achieves the following results:

GLUE test results:

| Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average |
|:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:|
| Score | 84.6/83.4 | 71.2 | 90.5 | 93.5 | 52.1 | 85.8 | 88.9 | 66.4 | 79.6 |

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
  author    = {Jacob Devlin and
               Ming{-}Wei Chang and
               Kenton Lee and
               Kristina Toutanova},
  title     = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
               Understanding},
  journal   = {CoRR},
  volume    = {abs/1810.04805},
  year      = {2018},
  url       = {http://arxiv.org/abs/1810.04805},
  archivePrefix = {arXiv},
  eprint    = {1810.04805},
  timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```

<a href="https://huggingface.co/exbert/?model=bert-base-uncased">
	<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>