---
license: afl-3.0
language:
- zh
tags:
- Chinese Spell Correction
- csc
- Chinese Spell Checking
---
# ReaLiSe-for-csc
A Chinese Spell Checking (CSC) model.

The checkpoint is the one provided with the original ReaLiSe source code.

Original paper: https://arxiv.org/abs/2105.12306

Official code of the original paper: https://github.com/DaDaMrX/ReaLiSe

Its performance on SIGHAN2015 is as follows (all values are percentages; a sketch of one common way these sentence-level metrics are computed follows the table):
| | Detect-Acc | Detect-Precision | Detect-Recall | Detect-F1 | Correct-Acc | Correct-Precision | Correct-Recall | Correct-F1 |
|--|--|--|--|--|--|--|--|--|
| Sentence-level | 84.7 | 77.3 | 81.3 | 79.3 | 84.0 | 75.9 | 79.9 | 77.8 |
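The sketch below illustrates one common convention for sentence-level CSC metrics: a sentence counts as a detection hit when the positions the model changed exactly match the gold error positions, and as a correction hit when the whole output equals the gold target. It is only an illustration under these assumptions (including length-preserving corrections); the table above comes from the evaluation protocol of the original ReaLiSe code, which may differ in details, and the helper name `sentence_level_metrics` is hypothetical.
```
def sentence_level_metrics(sources, predictions, targets):
    """Toy sentence-level CSC metrics; assumes equal-length source/prediction/target strings."""
    det_hit = cor_hit = changed = has_errors = det_ok = cor_ok = 0
    for src, pred, tgt in zip(sources, predictions, targets):
        pred_pos = {i for i, (s, p) in enumerate(zip(src, pred)) if s != p}  # positions the model changed
        gold_pos = {i for i, (s, t) in enumerate(zip(src, tgt)) if s != t}   # positions with gold errors
        changed += bool(pred_pos)
        has_errors += bool(gold_pos)
        det_hit += bool(pred_pos) and pred_pos == gold_pos   # detection: same error positions
        cor_hit += bool(pred_pos) and pred == tgt            # correction: fully corrected sentence
        det_ok += pred_pos == gold_pos                       # for detection accuracy
        cor_ok += pred == tgt                                # for correction accuracy

    def prf(hit, n_pred, n_gold):
        p = hit / n_pred if n_pred else 0.0
        r = hit / n_gold if n_gold else 0.0
        f = 2 * p * r / (p + r) if p + r else 0.0
        return p, r, f

    n = len(sources)
    return {
        "detect": (det_ok / n, *prf(det_hit, changed, has_errors)),
        "correct": (cor_ok / n, *prf(cor_hit, changed, has_errors)),
    }
```
Each entry is (accuracy, precision, recall, F1), mirroring the columns of the table above.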
# Usage
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/iioSnail/ReaLiSe/blob/master/ReaLiSe_for_csc_Demo.ipynb)
Install the dependencies:
```
!pip install transformers
!pip install pypinyin
!pip install boto3
```
```
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("iioSnail/ReaLiSe-for-csc", trust_remote_code=True)
model = AutoModel.from_pretrained("iioSnail/ReaLiSe-for-csc", trust_remote_code=True)

inputs = tokenizer(["我是炼习时长两念半的个人练习生蔡徐坤"], return_tensors='pt')
logits = model(**inputs).logits  # (batch, seq_len, vocab_size)
print(''.join(tokenizer.convert_ids_to_tokens(logits.argmax(-1)[0, 1:-1])))  # drop [CLS] and [SEP]
```
Output:
```
我是练习时长两年半的个人练习生蔡徐坤
```
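If you decode the logits yourself for a batch, the sentences are padded to the same length, so the padding (and the [CLS]/[SEP] positions) should be stripped per sentence before converting ids back to characters. A minimal sketch, assuming the tokenizer returns a standard `attention_mask` alongside its model-specific inputs:
```
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("iioSnail/ReaLiSe-for-csc", trust_remote_code=True)
model = AutoModel.from_pretrained("iioSnail/ReaLiSe-for-csc", trust_remote_code=True)

sentences = ["我是炼习时长两念半的个人练习生蔡徐坤", "喜换唱跳rap蓝球"]
inputs = tokenizer(sentences, padding=True, return_tensors='pt')

with torch.no_grad():
    pred_ids = model(**inputs).logits.argmax(-1)   # (batch, seq_len)

for i in range(len(sentences)):
    length = int(inputs["attention_mask"][i].sum())   # real tokens incl. [CLS]/[SEP]
    ids = pred_ids[i, 1:length - 1].tolist()          # drop [CLS], [SEP], and padding
    print(''.join(tokenizer.convert_ids_to_tokens(ids)))
```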
You can also use the `predict` method bundled with the model, which handles this for you.
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("iioSnail/ReaLiSe-for-csc", trust_remote_code=True)
model = AutoModel.from_pretrained("iioSnail/ReaLiSe-for-csc", trust_remote_code=True)
model.set_tokenizer(tokenizer)  # call this before using the predict method
print(model.predict("我是练习时长两念半的鸽仁练习生蔡徐坤"))
print(model.predict(["我是练习时长两念半的鸽仁练习生蔡徐坤", "喜换唱跳、rap 和 蓝球"]))
```
Output:
```
我是练习时长两年半的各仁练习生蔡徐坤
['我是练习时长两年半的各仁练习生蔡徐坤', '喜欢唱跳、rap 和 蓝球']
```
# Training
```
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("iioSnail/ReaLiSe-for-csc", trust_remote_code=True)
model = AutoModel.from_pretrained("iioSnail/ReaLiSe-for-csc", trust_remote_code=True)
inputs = tokenizer(["我是炼习时长两念半的个人练习生蔡徐坤", "喜换唱跳rap蓝球"],
                   text_target=["我是练习时长两年半的个人练习生蔡徐坤", "喜欢唱跳rap篮球"],
                   padding=True,
                   return_tensors='pt')
loss = model(**inputs).loss
print("loss:", loss)
loss.backward()
```
Output:
```
loss: tensor(0.6515, grad_fn=<NllLossBackward0>)
```
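Beyond the single forward/backward pass above, fine-tuning is an ordinary PyTorch loop over (misspelled, corrected) sentence pairs. The sketch below is a minimal illustration with toy data; batching, shuffling, learning-rate scheduling, and evaluation are omitted, and the hyperparameters are placeholders.
```
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("iioSnail/ReaLiSe-for-csc", trust_remote_code=True)
model = AutoModel.from_pretrained("iioSnail/ReaLiSe-for-csc", trust_remote_code=True)
model.train()

# Toy (source with typos, corrected target) pairs; replace with a real CSC training set.
train_pairs = [
    ("我是炼习时长两念半的个人练习生蔡徐坤", "我是练习时长两年半的个人练习生蔡徐坤"),
    ("喜换唱跳rap蓝球", "喜欢唱跳rap篮球"),
]
sources = [s for s, _ in train_pairs]
targets = [t for _, t in train_pairs]

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
for epoch in range(3):
    inputs = tokenizer(sources, text_target=targets, padding=True, return_tensors='pt')
    loss = model(**inputs).loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss = {loss.item():.4f}")
```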
# FAQ
1. Network problems, e.g. `Connection Error`.

   Solution: download the model and load it from a local directory (see the sketch after this list). For downloading all of the repository files at once, you can refer to this [blog post](https://blog.csdn.net/zhaohongfei_358/article/details/126222999) (in Chinese).
2. Loading a locally downloaded copy fails with `ModuleNotFoundError: No module named 'transformers_modules.iioSnail/ReaLiSe-for-csc'`.

   Solution: change `iioSnail/ReaLiSe-for-csc` to `iioSnail\ReaLiSe-for-csc` in the path you pass to `from_pretrained`, or upgrade `transformers`.
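For problem 1, once the repository files are on disk you can point `from_pretrained` at the local folder instead of the Hub id. A minimal sketch, where `./ReaLiSe-for-csc` is a placeholder for wherever you saved the files:
```
from transformers import AutoTokenizer, AutoModel

# Placeholder path: the directory that contains config.json, the weights, and the custom code files.
local_dir = "./ReaLiSe-for-csc"

tokenizer = AutoTokenizer.from_pretrained(local_dir, trust_remote_code=True)
model = AutoModel.from_pretrained(local_dir, trust_remote_code=True)
```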