---
license: afl-3.0
language:
- zh
tags:
- Chinese Spell Correction
- csc
- Chinese Spell Checking
---
# ReaLiSe-for-csc
A Chinese Spell Checking (CSC, 中文拼写纠错) model.

This model is derived from the pretrained model released with the official ReaLiSe source code.

Original paper: https://arxiv.org/abs/2105.12306

Official implementation: https://github.com/DaDaMrX/ReaLiSe

Performance of this model on SIGHAN2015:
| | Detect-Acc | Detect-Precision | Detect-Recall | Detect-F1 | Correct-Acc | Correct-Precision | Correct-Recall | Correct-F1 |
|--|--|--|--|--|--|--|--|--|
| Sentence-level | 84.7 | 77.3 | 81.3 | 79.3 | 84.0 | 75.9 | 79.9 | 77.8 |
# Usage
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/iioSnail/SCOPE/blob/main/ChineseBERT-for-csc_Demo.ipynb)
Install the dependencies:
```
!pip install transformers
!pip install pypinyin
!pip install boto3
```
```
from transformers import AutoTokenizer, AutoModel

# trust_remote_code is required because ReaLiSe ships custom model/tokenizer code
tokenizer = AutoTokenizer.from_pretrained("iioSnail/ReaLiSe-for-csc", trust_remote_code=True)
model = AutoModel.from_pretrained("iioSnail/ReaLiSe-for-csc", trust_remote_code=True)

inputs = tokenizer(["我是炼习时长两念半的个人练习生蔡徐坤"], return_tensors='pt')
output_hidden = model(**inputs).logits

# Take the argmax over the vocabulary at each position, drop [CLS]/[SEP], and decode back to characters
print(''.join(tokenizer.convert_ids_to_tokens(output_hidden.argmax(-1)[0, 1:-1])))
```
Output:
```
我是练习时长两年半的个人练习生蔡徐坤
```
You can also use the `predict` method that this model provides.
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("iioSnail/ReaLiSe-for-csc", trust_remote_code=True)
model = AutoModel.from_pretrained("iioSnail/ReaLiSe-for-csc", trust_remote_code=True)
model.set_tokenizer(tokenizer)  # call this once before using the predict method
print(model.predict("我是练习时长两念半的鸽仁练习生蔡徐坤"))
print(model.predict(["我是练习时长两念半的鸽仁练习生蔡徐坤", "喜换唱跳、rap 和 蓝球"]))
```
Output:
```
我是练习时长两年半的各仁练习生蔡徐坤
['我是练习时长两年半的各仁练习生蔡徐坤', '喜欢唱跳、rap 和 蓝球']
```
# Training
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("iioSnail/ReaLiSe-for-csc", trust_remote_code=True)
model = AutoModel.from_pretrained("iioSnail/ReaLiSe-for-csc", trust_remote_code=True)
inputs = tokenizer(["我是炼习时长两念半的个人练习生蔡徐坤", "喜换唱跳rap蓝球"],
                   text_target=["我是练习时长两年半的个人练习生蔡徐坤", "喜欢唱跳rap篮球"],
                   padding=True,
                   return_tensors='pt')
loss = model(**inputs).loss
print("loss:", loss)
loss.backward()
```
Output:
```
loss: tensor(0.6515, grad_fn=<NllLossBackward0>)
```
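The snippet above only computes the loss for a single batch. As a minimal sketch of a complete fine-tuning loop built on the same API, the example below adds a standard `AdamW` optimizer and an epoch loop; the training pairs, learning rate, and epoch count are illustrative placeholders, not values from the original ReaLiSe training setup.
```
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("iioSnail/ReaLiSe-for-csc", trust_remote_code=True)
model = AutoModel.from_pretrained("iioSnail/ReaLiSe-for-csc", trust_remote_code=True)

# Hypothetical training pairs: (sentence with typos, corrected sentence)
train_pairs = [
    ("我是炼习时长两念半的个人练习生蔡徐坤", "我是练习时长两年半的个人练习生蔡徐坤"),
    ("喜换唱跳rap蓝球", "喜欢唱跳rap篮球"),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)  # placeholder learning rate
model.train()

for epoch in range(3):  # placeholder number of epochs
    for src, tgt in train_pairs:  # for real data, batch these with a DataLoader
        inputs = tokenizer([src], text_target=[tgt], padding=True, return_tensors='pt')
        loss = model(**inputs).loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```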
# FAQ
1. Network problems, e.g. `Connection Error`.

   Solution: download the model and load it from a local directory (see the sketch after this list). For downloading models in bulk, you can refer to this [blog post](https://blog.csdn.net/zhaohongfei_358/article/details/126222999).

2. When loading the locally downloaded model, you get the error `ModuleNotFoundError: No module named 'transformers_modules.iioSnail/ReaLiSe-for-csc'`.

   Solution: change `iioSnail/ReaLiSe-for-csc` to `iioSnail\ReaLiSe-for-csc`, or upgrade transformers.
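For the network issue above, one way to download the repository once and then load it offline is sketched below. It uses `snapshot_download` from `huggingface_hub` (installed together with transformers); the local directory name is an arbitrary, illustrative choice.
```
from huggingface_hub import snapshot_download
from transformers import AutoTokenizer, AutoModel

# Download the whole model repository once (local_dir is an illustrative path)
local_dir = snapshot_download(repo_id="iioSnail/ReaLiSe-for-csc", local_dir="./ReaLiSe-for-csc")

# Afterwards the model can be loaded from that directory without network access
tokenizer = AutoTokenizer.from_pretrained(local_dir, trust_remote_code=True)
model = AutoModel.from_pretrained(local_dir, trust_remote_code=True)
```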