wangyulong committed on
Commit
f97122a
1 Parent(s): 770f5d5

Create README.md

Files changed (1): README.md +32 -0

---
language:
- zh
license: apache-2.0
---

# Mengzi-BERT base model (Chinese)

Pretrained model on a 300 GB Chinese corpus. Masked language modeling (MLM), part-of-speech (POS) tagging, and sentence order prediction (SOP) are used as training tasks (the MLM objective is sketched below).

[Mengzi: A Lightweight yet Powerful Chinese Pre-trained Language Model](www.example.com)
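
To make the MLM objective above concrete, the sketch below hides one token of a sentence and lets a masked-LM head reconstruct it. This is a minimal sketch rather than the actual pretraining code: loading `BertForMaskedLM` assumes the hosted checkpoint includes the MLM head weights, and the sentence and masked position are made up for illustration.

```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("Langboat/mengzi-bert-base")
# Assumption: the checkpoint ships masked-LM head weights.
model = BertForMaskedLM.from_pretrained("Langboat/mengzi-bert-base")

inputs = tokenizer("今天天气很好。", return_tensors="pt")  # made-up sentence
labels = torch.full_like(inputs.input_ids, -100)  # -100 = ignored by the loss
mask_pos = 3                                      # arbitrary position to hide
labels[0, mask_pos] = inputs.input_ids[0, mask_pos]      # target = original token
inputs.input_ids[0, mask_pos] = tokenizer.mask_token_id  # replace it with [MASK]

outputs = model(**inputs, labels=labels)
print(outputs.loss)  # cross-entropy on the masked position only
```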

## Usage

```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained("Langboat/mengzi-bert-base")
model = BertModel.from_pretrained("Langboat/mengzi-bert-base")
```
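
For fill-mask inference (the task this repo is tagged with), the high-level `pipeline` API can be used as well. A minimal sketch, assuming the hosted checkpoint includes the masked-LM head; the Chinese prompt is made up for illustration:

```python
from transformers import pipeline

# Assumption: the checkpoint serves the fill-mask task end to end.
fill_mask = pipeline("fill-mask", model="Langboat/mengzi-bert-base")

# [MASK] is BERT's mask token; the prompt is an invented example.
for pred in fill_mask("生活的真谛是[MASK]。"):
    print(pred["token_str"], pred["score"])
```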

## Scores on nine Chinese tasks (without any data augmentation)

|Model|AFQMC|TNEWS|IFLYTEK|CMNLI|WSC|CSL|CMRC|C3|CHID|
|-|-|-|-|-|-|-|-|-|-|
|CLUE RoBERTa-wwm-ext Baseline|74.04|56.94|60.31|80.51|67.80|81|75.20|66.5|83.62|
|Mengzi-BERT-base|74.58|57.97|60.68|82.12|87.50|85.4|78.54|71.7|0|

## Citation

If you find the technical report or resource useful, please cite the following technical report in your paper.

```
example
```