---
language:
- zh
license: apache-2.0
tags:
- bert
- NLU
- NLI
inference: true
widget:
- text: "今天心情不好[SEP]今天很开心"
---

# Erlangshen-MegatronBert-1.3B-NLI, a Chinese NLI model, one model of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)

We collected four Chinese NLI (Natural Language Inference) datasets for fine-tuning, 1,014,787 samples in total. Our model is mainly based on [Erlangshen-MegatronBert-1.3B](https://huggingface.co/IDEA-CCNL/Erlangshen-MegatronBert-1.3B).

## Usage

```python
from transformers import AutoModelForSequenceClassification, BertTokenizer
import torch

tokenizer = BertTokenizer.from_pretrained('IDEA-CCNL/Erlangshen-MegatronBert-1.3B-NLI')
model = AutoModelForSequenceClassification.from_pretrained('IDEA-CCNL/Erlangshen-MegatronBert-1.3B-NLI')

texta = '今天的饭不好吃'  # "The food today is not good."
textb = '今天心情不好'    # "I am in a bad mood today."

# Encode the sentence pair jointly and print a probability over the NLI classes.
output = model(torch.tensor([tokenizer.encode(texta, textb)]))
print(torch.nn.functional.softmax(output.logits, dim=-1))
```
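
The printed probabilities are only as useful as the index-to-label mapping, which is fixed at fine-tuning time; rather than assuming an order such as (contradiction, neutral, entailment), read it from the model config. Continuing from the snippet above, a minimal sketch (note that `model.config.id2label` falls back to generic `LABEL_0`-style names if the config carries no explicit label names):

```python
# Map the highest-scoring logit back to its label name via the config,
# instead of hard-coding an assumed label order.
probs = torch.nn.functional.softmax(output.logits, dim=-1)
pred_id = probs.argmax(dim=-1).item()
print(model.config.id2label[pred_id], probs[0, pred_id].item())
```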

## Scores on downstream Chinese tasks (without any data augmentation)

| Model | cmnli | ocnli | snli |
| :--------: | :-----: | :----: | :-----: |
| Erlangshen-Roberta-110M-NLI | 80.83 | 78.56 | 88.01 |
| Erlangshen-Roberta-330M-NLI | 82.25 | 79.82 | 88.00 |
| Erlangshen-MegatronBert-1.3B-NLI | 84.52 | 84.17 | 88.67 |

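As a rough guide to how such a score can be checked, the sketch below computes accuracy on an NLI dev split. It assumes the CLUE hosting of OCNLI (`load_dataset('clue', 'ocnli')`, with `sentence1`/`sentence2`/`label` columns) is available to the `datasets` library, and that the dataset's label ids agree with the model's `label2id`; both points should be verified before trusting the number:

```python
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, BertTokenizer
import torch

name = 'IDEA-CCNL/Erlangshen-MegatronBert-1.3B-NLI'
tokenizer = BertTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name).eval()

# Assumption: the CLUE mirror of OCNLI with sentence1/sentence2/label columns.
dev = load_dataset('clue', 'ocnli', split='validation')

correct = 0
for ex in dev:
    inputs = tokenizer(ex['sentence1'], ex['sentence2'],
                       truncation=True, max_length=512, return_tensors='pt')
    with torch.no_grad():
        pred = model(**inputs).logits.argmax(dim=-1).item()
    correct += int(pred == ex['label'])  # assumes matching label ids

print(f'OCNLI dev accuracy: {correct / len(dev):.2%}')
```
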
## Citation

If you find this resource useful, please cite the following website in your paper.

```
@misc{Fengshenbang-LM,
  title={Fengshenbang-LM},
  author={IDEA-CCNL},
  year={2021},
  howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```