bayartsogt committed
Commit 2cc4087
Parent(s): ec62a6f
Update README.md
README.md CHANGED
# StructBERT: Unofficial Copy

Official Repository Link: https://github.com/alibaba/AliceMind/tree/main/StructBERT

**Disclaimer**
* This model card is not produced by the [AliceMind Team](https://github.com/alibaba/AliceMind/)

## Reproduce HFHub models:
Download the model weights, config, and tokenizer vocab:
```bash
wget https://raw.githubusercontent.com/alibaba/AliceMind/main/StructBERT/config/large_bert_config.json && mv large_bert_config.json config.json
wget https://raw.githubusercontent.com/alibaba/AliceMind/main/StructBERT/config/vocab.txt
wget https://alice-open.oss-cn-zhangjiakou.aliyuncs.com/StructBERT/en_model && mv en_model pytorch_model.bin
```

```python
from transformers import AutoConfig, AutoModelForMaskedLM, AutoTokenizer

# Load the files downloaded above from the current directory.
config = AutoConfig.from_pretrained("./config.json")
model = AutoModelForMaskedLM.from_pretrained(".", config=config)
tokenizer = AutoTokenizer.from_pretrained(".", config=config)

# Upload the converted checkpoint to the Hugging Face Hub.
model.push_to_hub("structbert-large")
tokenizer.push_to_hub("structbert-large")
```
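
Once uploaded, the checkpoint behaves like any other BERT-style model on the Hub. A quick sanity check with the `fill-mask` pipeline, as a minimal sketch: the repo id `bayartsogt/structbert-large` is an assumption based on the uploader's namespace, so substitute your own.

```python
from transformers import pipeline

# Hypothetical repo id: push_to_hub uploads under your own Hub namespace.
fill_mask = pipeline("fill-mask", model="bayartsogt/structbert-large")

# Print the top predictions for the masked token with their scores.
for pred in fill_mask("The capital of France is [MASK]."):
    print(f"{pred['token_str']:>12}  {pred['score']:.4f}")
```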

[https://arxiv.org/abs/1908.04577](https://arxiv.org/abs/1908.04577)

# StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding
## Introduction
We extend BERT to a new model, StructBERT, by incorporating language structures into pre-training.
Specifically, we pre-train StructBERT with two auxiliary tasks to make the most of the sequential order of words and sentences, which leverage language structures at the word and sentence levels, respectively.
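
For context on what those objectives look like: the word-level task shuffles short spans of the input tokens and trains the model to reconstruct their original order, while the sentence-level task classifies whether a sentence pair appears in next, previous, or random order. Below is a minimal sketch of the word-level corruption step, assuming trigram spans (K = 3, as in the paper); the function name and sampling scheme are illustrative, not the authors' implementation.

```python
import random

def shuffle_trigrams(token_ids, num_spans=1, span_len=3, seed=None):
    """Corrupt a token sequence for a StructBERT-style word objective:
    pick random trigrams and permute the tokens inside each one.
    Reconstruction targets are the original (position, tokens) pairs."""
    rng = random.Random(seed)
    ids = list(token_ids)
    targets = []
    for _ in range(num_spans):
        if len(ids) < span_len:
            break
        start = rng.randrange(len(ids) - span_len + 1)
        span = ids[start:start + span_len]
        targets.append((start, list(span)))  # model must restore this order
        rng.shuffle(span)
        ids[start:start + span_len] = span
    return ids, targets

# Example: shuffle one trigram in a toy token-id sequence.
corrupted, targets = shuffle_trigrams([101, 2023, 2003, 1037, 3231, 102], seed=0)
print(corrupted, targets)
```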