---
license: mit
datasets:
- Helsinki-NLP/tatoeba_mt
language:
- ja
- ko
pipeline_tag: translation
tags:
- python
- transformer
- pytorch
---
# Japanese to Korean translator for FFXIV
**FINAL FANTASY is a registered trademark of Square Enix Holdings Co., Ltd.**
This project is described in detail in the [GitHub repository](https://github.com/sappho192/ffxiv-ja-ko-translator).
# Demo
[![demo.gif](demo.gif)](https://huggingface.co/spaces/sappho192/ffxiv-ja-ko-translator-demo)
[Click to try the demo](https://huggingface.co/spaces/sappho192/ffxiv-ja-ko-translator-demo)
# Usage
Check the [test_eval.ipynb](https://huggingface.co/sappho192/ffxiv-ja-ko-translator/blob/main/test_eval.ipynb) notebook or the section below.
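The inference code below loads the weights from a local directory. If you only have this Hub repository, one way to fetch the files is sketched here (assumes the `huggingface_hub` package is installed; the `./best_model` directory name is just an example that matches the snippet in the next section):

```Python
from huggingface_hub import snapshot_download

# Download the model files from this repository into a local directory.
# Any directory name works; `./best_model` matches the inference example below.
snapshot_download(
    repo_id="sappho192/ffxiv-ja-ko-translator",
    local_dir="./best_model",
)
```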
## Inference
```Python
from transformers import (
    EncoderDecoderModel,
    PreTrainedTokenizerFast,
    BertJapaneseTokenizer,
)
import torch

encoder_model_name = "cl-tohoku/bert-base-japanese-v2"
decoder_model_name = "skt/kogpt2-base-v2"

src_tokenizer = BertJapaneseTokenizer.from_pretrained(encoder_model_name)
trg_tokenizer = PreTrainedTokenizerFast.from_pretrained(decoder_model_name)

# Change `./best_model` below to the path of the model **directory**
model = EncoderDecoderModel.from_pretrained("./best_model")

text = "ギルガメッシュ討伐戦"
# text = "ギルガメッシュ討伐戦に行ってきます。一緒に行きましょうか?"

def translate(text_src):
    embeddings = src_tokenizer(text_src, return_attention_mask=False,
                               return_token_type_ids=False, return_tensors='pt')
    # Convert the BatchEncoding to a plain dict of tensors
    embeddings = {k: v for k, v in embeddings.items()}
    # Drop the leading BOS and trailing EOS tokens from the generated sequence
    output = model.generate(**embeddings)[0, 1:-1]
    text_trg = trg_tokenizer.decode(output.cpu())
    return text_trg

print(translate(text))
```
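The example above runs greedy decoding on CPU. If a GPU is available, you can optionally move the model there and pass standard `generate` options such as beam search; a minimal sketch (the exact generation settings here are only examples, not tuned values):

```Python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
model.eval()

def translate_beam(text_src, num_beams=5, max_new_tokens=64):
    embeddings = src_tokenizer(text_src, return_attention_mask=False,
                               return_token_type_ids=False, return_tensors='pt')
    # Move the input tensors to the same device as the model
    embeddings = {k: v.to(device) for k, v in embeddings.items()}
    with torch.no_grad():
        output = model.generate(
            **embeddings,
            num_beams=num_beams,
            max_new_tokens=max_new_tokens,
        )[0, 1:-1]
    return trg_tokenizer.decode(output.cpu())

print(translate_beam("ギルガメッシュ討伐戦"))
```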