1. Project Introduction
This project is based on mRASP, an excellent machine translation project on GitHub. The officially released fairseq pretrained weights have been converted to the transformers architecture so that the model is easier to use.
2. Usage
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the converted mRASP model and its custom tokenizer from the Hub.
# trust_remote_code is required because the tokenizer class lives in this repository;
# cache_dir keeps the downloaded files under the same local directory.
model_path = 'ENLP/mrasp'
model = AutoModelForSeq2SeqLM.from_pretrained(model_path, trust_remote_code=True, cache_dir=model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True, cache_dir=model_path)

# Tokenize the source sentence, translate, and decode the result.
input_text = ["Welcome to download and use!"]
inputs = tokenizer(input_text, return_tensors="pt", padding=True, max_length=300, truncation=True)
result = model.generate(**inputs)
result = tokenizer.batch_decode(result, skip_special_tokens=True)
result = [pre.strip() for pre in result]
# Expected output: ['欢迎下载和使用!']
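The same pipeline can translate several sentences at once. The sketch below reuses the model and tokenizer objects loaded above; num_beams and max_length are ordinary transformers generation arguments chosen for illustration, not settings documented by this repository, and the example sentences are hypothetical.

# Translate a small batch with beam search (illustrative settings, not repository defaults).
batch = [
    "Machine translation has improved rapidly in recent years.",
    "Please see the project page for more details.",
]
inputs = tokenizer(batch, return_tensors="pt", padding=True, max_length=300, truncation=True)
outputs = model.generate(**inputs, num_beams=5, max_length=300)
translations = [t.strip() for t in tokenizer.batch_decode(outputs, skip_special_tokens=True)]
print(translations)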
3. Notes
The underlying model supports 32 languages; see the mRASP project for details. However, the tokenizer in this repository is optimized only for Chinese and English. To work with other languages, modify tokenization_bat.py yourself; a quick round-trip check such as the sketch below can help you judge how much adaptation is needed.
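Before adapting tokenization_bat.py to another language, it can help to see how the current bilingual tokenizer round-trips a sentence in that language. This is a minimal sketch that reuses only the tokenizer calls from the example above; the French sentence is a hypothetical sample.

# Check round-trip fidelity on a non-Chinese/non-English sample;
# heavy loss after decoding suggests the tokenization rules need extension for that language.
sample = "Bonjour tout le monde"  # hypothetical French example
encoded = tokenizer(sample, return_tensors="pt")
print(encoded["input_ids"])
print(tokenizer.batch_decode(encoded["input_ids"], skip_special_tokens=True))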
4. Other Models