Fill-Mask
Transformers
PyTorch
Chinese
bert
Inference Endpoints
hjy committed on
Commit b848b8c
1 Parent(s): 0e46ac3

Update README.md

Files changed (1)
  1. README.md +4 -0
README.md CHANGED
@@ -26,7 +26,11 @@ model = BertModel.from_pretrained("Langboat/mengzi-bert-base")
26   |-|-|-|-|-|-|-|-|-|-|
27   |RoBERTa-wwm-ext|74.04|56.94|60.31|80.51|67.80|81.00|75.20|66.50|83.62|
28   |Mengzi-BERT-base|74.58|57.97|60.68|82.12|87.50|85.40|78.54|71.70|84.16|
29 +
30 +
31 + ```bash
32   RoBERTa-wwm-ext scores are from CLUE baseline
33 + ```
34  
35   ## Citation
36   If you find the technical report or resource is useful, please cite the following technical report in your paper.
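The table in this hunk compares Mengzi-BERT-base with the RoBERTa-wwm-ext CLUE baseline across nine tasks. As a quick sanity check of the comparison, the mean scores can be computed directly from the values shown in the diff (task names are not part of this hunk, so only the raw scores are used):

```python
# Scores copied from the README table in the diff above (nine CLUE tasks).
roberta = [74.04, 56.94, 60.31, 80.51, 67.80, 81.00, 75.20, 66.50, 83.62]
mengzi = [74.58, 57.97, 60.68, 82.12, 87.50, 85.40, 78.54, 71.70, 84.16]

# Mean score per model and the average gap between them.
avg_roberta = sum(roberta) / len(roberta)
avg_mengzi = sum(mengzi) / len(mengzi)

print(round(avg_roberta, 2))  # → 71.77
print(round(avg_mengzi, 2))   # → 75.85
print(round(avg_mengzi - avg_roberta, 2))  # → 4.08
```

On these nine tasks, Mengzi-BERT-base averages about four points higher than the RoBERTa-wwm-ext baseline.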