Joelzhang committed on
Commit 2438b68
1 Parent(s): 7323828

Update README.md

Files changed (1)
  1. README.md +38 -14
README.md CHANGED
@@ -7,18 +7,38 @@ tags:
  - bert
  - NLU
  - FewCLUE

  inference: true

  ---
- # Erlangshen-MegatronBert-1.3B model (Chinese), one model of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).
- A bidirectional, encoder-based language model focused on solving natural language understanding tasks. With 1.3 billion parameters, trained on 280 GB of Chinese data for 14 days on 32 A100 GPUs, Erlangshen-MegatronBert-1.3B is the largest open-source Chinese BERT model. On November 10, 2021, **it reached the top of the [FewCLUE](https://www.cluebenchmarks.com/fewclue.html) list**, the authoritative benchmark for Chinese language understanding.

- [IDEA Research Institute's Chinese pre-trained model Erlangshen tops the FewCLUE leaderboard](https://mp.weixin.qq.com/s/bA_9n_TlBE9P-UzCn7mKoA)

- Among the subtasks, **it surpasses human performance on CHID (idiom fill-in-the-blank) and TNEWS (news classification), and ranks first on the CHID, CSLDCP (subject document classification), and OCNLI (natural language inference) single tasks, setting new few-shot learning records**. The Erlangshen series will continue to be optimized in terms of model scale, knowledge integration, and supervision task assistance.

- ## Usage
  ```python
  from transformers import MegatronBertConfig, MegatronBertModel
  from transformers import BertTokenizer
@@ -26,21 +46,25 @@ from transformers import BertTokenizer
  tokenizer = BertTokenizer.from_pretrained("IDEA-CCNL/Erlangshen-MegatronBert-1.3B")
  config = MegatronBertConfig.from_pretrained("IDEA-CCNL/Erlangshen-MegatronBert-1.3B")
  model = MegatronBertModel.from_pretrained("IDEA-CCNL/Erlangshen-MegatronBert-1.3B")
-
  ```
- ## Scores on downstream Chinese tasks (without any data augmentation)
  | Model | afqmc | tnews | iflytek | ocnli | cmnli | wsc | csl |
  | :--------: | :-----: | :----: | :-----: | :----: | :----: | :----: | :----: |
  | roberta-wwm-ext-large | 0.7514 | 0.5872 | 0.6152 | 0.777 | 0.814 | 0.8914 | 0.86 |
  | Erlangshen-MegatronBert-1.3B | 0.7608 | 0.5996 | 0.6234 | 0.7917 | 0.81 | 0.9243 | 0.872 |

- ## Citation
- If you find this resource useful, please cite the following website in your paper.
  ```
- @misc{Fengshenbang-LM,
- title={Fengshenbang-LM},
- author={IDEA-CCNL},
- year={2021},
- howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
  }
  ```
 
  - bert
  - NLU
  - FewCLUE
+ - ZeroCLUE

  inference: true

  ---
+ # Erlangshen-MegatronBert-1.3B
+ - Topped the FewCLUE and ZeroCLUE benchmarks in 2021; handles NLU tasks; the largest open-source Chinese BERT model at the time of release
+ - GitHub: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
+ - Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/)

+ ## 模型分类学 Model Taxonomy
+ | 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
+ | ---- | ---- | ---- | ---- | ---- | ---- |
+ | 通用 General | 自然语言理解 NLU | 二郎神 Erlangshen | MegatronBert | 1.3B | - |

+ ## 模型信息 Model Information
+ Encoder结构为主的双向语言模型,专注于解决各种自然语言理解任务。
+ 我们跟进了[Megatron-LM](https://github.com/NVIDIA/Megatron-LM)的工作,使用了32张A100,总共耗时14天在悟道语料库(180 GB版本)上训练了十亿级别参数量的BERT。同时,鉴于中文语法和大规模训练的难度,我们使用四种预训练策略来改进BERT:1) 整词掩码, 2) 知识动态遮掩, 3) 句子顺序预测, 4) 层前归一化。

+ A bidirectional language model based on an encoder structure, focusing on solving various NLU tasks.
+ Following [Megatron-LM](https://github.com/NVIDIA/Megatron-LM), we spent 14 days on 32 A100 GPUs training a BERT with over a billion parameters on the WuDao Corpora (180 GB version). Given the particularities of Chinese grammar and the difficulty of large-scale training, we use four pre-training strategies to improve BERT: 1) Whole Word Masking (WWM), 2) Knowledge-based Dynamic Masking (KDM), 3) Sentence Order Prediction (SOP), 4) Pre-layer Normalization (Pre-LN).
+
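As a rough illustration of the masked-language-modelling objective that the masking strategies above (WWM, KDM) operate on, the following sketch is an assumption about typical usage and is not part of the original model card; the example sentence and the choice of the `MegatronBertForMaskedLM` head are only illustrative:

```python
import torch
from transformers import BertTokenizer, MegatronBertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("IDEA-CCNL/Erlangshen-MegatronBert-1.3B")
model = MegatronBertForMaskedLM.from_pretrained("IDEA-CCNL/Erlangshen-MegatronBert-1.3B")
model.eval()

# Mask one position of an arbitrary Chinese sentence and let the MLM head fill it in.
text = "生活的真谛是[MASK]。"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the [MASK] token and print its top-5 predictions.
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_pos].topk(5, dim=-1).indices[0]
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
```

The same kind of check should also be possible through the `transformers` `FillMaskPipeline` with this checkpoint and tokenizer.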
+ ## 成就 Achievement
+ 1. 2021年11月10日,二郎神在FewCLUE上取得第一。其中,它在CHIDF(成语填空)和TNEWS(新闻分类)子任务中的表现优于人类表现。此外,它在CHIDF(成语填空), CSLDCP(学科文献分类), OCNLI(自然语言推理)任务中均名列前茅。
+ 2. 2022年1月24日,二郎神在CLUE基准测试中的ZeroCLUE中取得第一。具体到子任务,我们在CSLDCP(主题文献分类), TNEWS(新闻分类), IFLYTEK(应用描述分类), CSL(抽象关键字识别)和CLUEWSC(指代消歧)任务中取得第一。
+ 3. 2022年7月10日,我们在CLUE基准的语义匹配任务中取得第一。
+
+ 1. On November 10, 2021, Erlangshen topped the FewCLUE benchmark. It outperformed human performance on the CHIDF (idiom fill-in-the-blank) and TNEWS (news classification) subtasks, and ranked first on the CHIDF (idiom fill-in-the-blank), CSLDCP (subject literature classification), and OCNLI (natural language inference) tasks.
+ 2. On January 24, 2022, Erlangshen-MRC topped the ZeroCLUE benchmark. Specifically, it ranked first on the CSLDCP (subject literature classification), TNEWS (news classification), IFLYTEK (application description classification), CSL (abstract keyword recognition), and CLUEWSC (referential disambiguation) subtasks.
+ 3. On July 10, 2022, Erlangshen topped the semantic matching task of the CLUE benchmark.
+
+ ## 使用 Usage
  ```python
  from transformers import MegatronBertConfig, MegatronBertModel
  from transformers import BertTokenizer

  tokenizer = BertTokenizer.from_pretrained("IDEA-CCNL/Erlangshen-MegatronBert-1.3B")
  config = MegatronBertConfig.from_pretrained("IDEA-CCNL/Erlangshen-MegatronBert-1.3B")
  model = MegatronBertModel.from_pretrained("IDEA-CCNL/Erlangshen-MegatronBert-1.3B")
  ```
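
The block above only loads the checkpoint. Continuing from it, a minimal sanity check (not part of the original card; the input sentence is arbitrary) runs one forward pass and inspects the contextual representations:

```python
import torch

# Encode an arbitrary sentence and run it through the model loaded above.
inputs = tokenizer("今天天气真好。", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state has shape [batch_size, sequence_length, hidden_size].
print(outputs.last_hidden_state.shape)
```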
+
+ ## 下游任务表现 Performance
  | Model | afqmc | tnews | iflytek | ocnli | cmnli | wsc | csl |
  | :--------: | :-----: | :----: | :-----: | :----: | :----: | :----: | :----: |
  | roberta-wwm-ext-large | 0.7514 | 0.5872 | 0.6152 | 0.777 | 0.814 | 0.8914 | 0.86 |
  | Erlangshen-MegatronBert-1.3B | 0.7608 | 0.5996 | 0.6234 | 0.7917 | 0.81 | 0.9243 | 0.872 |
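
The scores above come from task-specific fine-tuning, and the card itself does not include the fine-tuning code. The sketch below is only an assumption of how such a classifier could be set up with the standard `transformers` sequence-classification head; the 15-label setup, dataset handling, and hyperparameters are illustrative placeholders, not the settings behind the reported numbers:

```python
from transformers import (
    BertTokenizer,
    MegatronBertForSequenceClassification,
    Trainer,
    TrainingArguments,
)

model_name = "IDEA-CCNL/Erlangshen-MegatronBert-1.3B"
tokenizer = BertTokenizer.from_pretrained(model_name)

# Hypothetical 15-way classifier in the style of the TNEWS news-classification task.
model = MegatronBertForSequenceClassification.from_pretrained(model_name, num_labels=15)

args = TrainingArguments(
    output_dir="erlangshen-tnews",        # placeholder output directory
    per_device_train_batch_size=8,
    learning_rate=2e-5,
    num_train_epochs=3,
)

# `train_dataset` / `eval_dataset` would be tokenized CLUE-style datasets prepared separately.
# trainer = Trainer(model=model, args=args, train_dataset=train_dataset, eval_dataset=eval_dataset)
# trainer.train()
```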

+
+ ## 引用 Citation
+ 如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970):
+
+ If you use this resource for your work, please cite our [paper](https://arxiv.org/abs/2209.02970):
  ```
+ @article{fengshenbang,
+ author = {Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen and Ruyi Gan and Jiaxing Zhang},
+ title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
+ journal = {CoRR},
+ volume = {abs/2209.02970},
+ year = {2022}
  }
  ```