---
metrics:
- perplexity
pipeline_tag: fill-mask
library_name: transformers
base_model:
- Jihuai/bert-ancient-chinese
---
## Use the model
```python
from transformers import BertTokenizer, BertForMaskedLM
import torch

# Load the tokenizer and model
tokenizer = BertTokenizer.from_pretrained('btqkhai/SinoNomBERT')
model = BertForMaskedLM.from_pretrained('btqkhai/SinoNomBERT')
model.eval()

# Sino-Nom sentence with one masked character (ground truth: 宴)
text = '大 [MASK] 百 官 其 𢮿 花 供 饌 皆 用 新 禮'
inputs = tokenizer(text, return_tensors="pt")

# Locate the [MASK] position in the input ids
mask_token_index = torch.where(inputs["input_ids"] == tokenizer.mask_token_id)[1]

with torch.no_grad():
    logits = model(**inputs).logits

# Take the logits at the masked position and decode the top prediction
mask_token_logits = logits[0, mask_token_index, :]
print("Predicted word:", tokenizer.decode(mask_token_logits[0].argmax()))
```