Update README.md
README.md CHANGED
@@ -35,7 +35,7 @@ This is the fine-tuned version of the Chinese BERT model on several NLI datasets
 
 基于[Erlangshen-MegatronBert-1.3B](https://huggingface.co/IDEA-CCNL/Erlangshen-MegatronBert-1.3B),我们在收集的4个用于finetune的中文领域的NLI(自然语言推理)数据集,总计1014787个样本上微调了一个NLI版本。
 
-Based on [Erlangshen-MegatronBert-1.3B](https://huggingface.co/IDEA-CCNL/Erlangshen-MegatronBert-1.3B), we fine-tuned an NLI version on 4 Natural Language Inference (NLI) datasets that are commonly used for fine-tuning
+Based on [Erlangshen-MegatronBert-1.3B](https://huggingface.co/IDEA-CCNL/Erlangshen-MegatronBert-1.3B), we fine-tuned an NLI version on 4 Chinese Natural Language Inference (NLI) datasets commonly used for fine-tuning, totaling 1,014,787 samples.
 
 ### 下游效果 Performance
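For context on what an NLI fine-tune like this produces at inference time, here is a minimal sketch of the standard post-processing step: turning a sequence-classification model's raw logits for a premise/hypothesis pair into an NLI label. The logit values and the label order (entailment/neutral/contradiction) are illustrative assumptions, not taken from this model card — check the model's `config.json` `id2label` mapping for the real order.

```python
import math

# Assumed label order for illustration only; the actual mapping for
# Erlangshen-MegatronBert-1.3B-NLI should be read from its config.
LABELS = ["entailment", "neutral", "contradiction"]

def softmax(logits):
    """Convert raw logits to probabilities (numerically stable)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_label(logits):
    """Return the highest-probability NLI label and all probabilities."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return LABELS[best], probs

# Hypothetical logits for one premise/hypothesis pair.
label, probs = predict_label([3.2, 0.1, -1.5])
print(label)  # entailment
```

In practice the logits would come from `AutoModelForSequenceClassification` applied to a tokenized premise/hypothesis pair; the softmax/argmax step above is the generic part that is the same regardless of the backbone.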