|
--- |
|
language: zh |
|
tags: |
|
- cross-encoder |
|
datasets: |
|
- dialogue |
|
--- |
|
|
|
# Data |
|
The training data consists of sentence-similarity pairs from e-commerce dialogues, about 500,000 (50w) sentence pairs.
|
|
|
## Model |
|
The model was built with [sentence-transformers](https://www.sbert.net/index.html). It uses a cross-encoder architecture, with hfl/chinese-roberta-wwm-ext as the pretrained base model.
|
The architecture is the same as [tuhailong/cross_encoder_roberta-wwm-ext_v1](https://huggingface.co/tuhailong/cross_encoder_roberta-wwm-ext_v1); the only difference is that training runs for 1 epoch instead of 5, which performs better on my dataset.
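The fine-tuning setup described above can be sketched as follows. This is a hedged, minimal sketch: only the base model (hfl/chinese-roberta-wwm-ext) and the single training epoch come from this card; the batch size, warmup steps, output path, and the `to_examples` helper are illustrative assumptions.

```python
def to_examples(rows):
    """Convert raw (sentence_a, sentence_b, label) rows into (texts, label)
    pairs, with the label as a float score (1.0 similar, 0.0 dissimilar)."""
    return [([a, b], float(y)) for a, b, y in rows]


def train(rows, output_path="cross_encoder_out"):
    """Fine-tune the base model for one epoch with sentence-transformers.
    Hyperparameters here are assumptions, not the card's exact settings."""
    from torch.utils.data import DataLoader
    from sentence_transformers import InputExample
    from sentence_transformers.cross_encoder import CrossEncoder

    # Cross-encoder head with a single similarity score per sentence pair.
    model = CrossEncoder("hfl/chinese-roberta-wwm-ext",
                         num_labels=1, max_length=64)
    examples = [InputExample(texts=t, label=y) for t, y in to_examples(rows)]
    loader = DataLoader(examples, shuffle=True, batch_size=32)
    # Per the card, 1 epoch (instead of 5) performed better on this dataset.
    model.fit(train_dataloader=loader, epochs=1, warmup_steps=100,
              output_path=output_path)
```

The heavy imports are deferred into `train()` so that the pure data-preparation helper can be used without sentence-transformers installed.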
|
|
|
### Usage |
|
```python
>>> from sentence_transformers.cross_encoder import CrossEncoder
>>> # replace model_save_path with the local path or Hub id of this model
>>> model = CrossEncoder(model_save_path, device="cuda", max_length=64)
>>> sentences = ["今天天气不错", "今天心情不错"]
>>> # predict() takes a list of sentence pairs and returns one score per pair
>>> scores = model.predict([sentences])
>>> print(scores[0])
```
|
|
|
#### Code |
|
The training code is available at [TTurn/cross-encoder](https://github.com/TTurn/cross-encoder).