---
license: mit
language:
- en
- zh
library_name: transformers
tags:
- translation
- fine tune
- fine_tune
widget:
- text: >-
    I {i}should{/i} say that I feel a little relieved to find out that
    {i}this{/i} is why you’ve been hanging out with Kaori lately, though. She’s
    really pretty and I got jealous and...I’m sorry.
---
# Normal1919/mbart-large-50-one-to-many-lil-fine-tune
* base model: mbart-large-50
* pretrained_ckpt: facebook/mbart-large-50-one-to-many-mmt
* This model was trained for [rpy dl translate](https://github.com/O5-7/rpy_dl_translate)
## Model description
* source group: English
* target group: Chinese
* model: transformer
* source language(s): eng
* target language(s): cjy_Hans cjy_Hant cmn cmn_Hans cmn_Hant gan lzh lzh_Hans nan wuu yue yue_Hans yue_Hant
* fine_tune: Fine-tuned from the mbart-large-50-one-to-many-mmt checkpoint on English source text containing Ren'Py text tags (including but not limited to {i}text{/i}), paired with Chinese targets that preserve the same tags, and additionally trained to keep English character names untranslated for LIL
## How to use
```python
>>> from transformers import MBartForConditionalGeneration, MBart50TokenizerFast, pipeline
>>> model_name = 'Normal1919/mbart-large-50-one-to-many-lil-fine-tune'
>>> model = MBartForConditionalGeneration.from_pretrained(model_name)
>>> tokenizer = MBart50TokenizerFast.from_pretrained(model_name, src_lang="en_XX", tgt_lang="zh_CN")
>>> translation = pipeline("translation", model=model, tokenizer=tokenizer, src_lang="en_XX", tgt_lang="zh_CN")
>>> translation('I {i}should{/i} say that I feel a little relieved to find out that {i}this{/i} is why you’ve been hanging out with Kaori lately, though. She’s really pretty and I got jealous and...I’m sorry.', max_length=400)
[{'translation_text': '我{i}应该{/i}说发现{i}这{/i}是你最近和Kaori出去的原因,我有点松了一口气。她很漂亮,我嫉妒,而且......我很抱歉。'}]
```
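Since the fine-tune's main promise is that Ren'Py tags survive translation, it can be useful to sanity-check the output. Below is a minimal sketch of such a check; the `tags_preserved` helper and the tag regex are assumptions for illustration, not tooling shipped with this model.

```python
import re

# Ren'Py style tags look like {i}...{/i}, {b}...{/b}, etc.
# This pattern matches both opening and closing tags.
TAG_RE = re.compile(r"\{/?[a-z]+\}")

def tags_preserved(source: str, translation: str) -> bool:
    """Return True when the translation keeps the source's
    Ren'Py tag sequence in the same order."""
    return TAG_RE.findall(source) == TAG_RE.findall(translation)

src = "I {i}should{/i} say that {i}this{/i} is why."
out = "我{i}应该{/i}说{i}这{/i}就是原因。"
print(tags_preserved(src, out))            # True
print(tags_preserved(src, "我应该说这就是原因。"))  # False: tags dropped
```

A check like this can flag lines where the model dropped or reordered markup, so they can be retried or reviewed by hand.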
## Contact
517205163@qq.com or
a4564563@gmail.com