# MarianMT exported to the ONNX format

This repository contains a MarianMT Vietnamese-to-English translation model exported to the ONNX format for inference with Optimum and ONNX Runtime.

## Install Optimum

Install Optimum with the ONNX Runtime extras, which `ORTModelForSeq2SeqLM` requires:

```bash
pip install "optimum[onnxruntime]"
```
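If you want to reproduce the export yourself, Optimum can convert a PyTorch MarianMT checkpoint to ONNX on the fly via `export=True`. The sketch below is only illustrative: the source checkpoint `Helsinki-NLP/opus-mt-vi-en` is an assumption and is not stated on this card.

```python
# Hedged sketch: export a MarianMT checkpoint to ONNX with Optimum.
# "Helsinki-NLP/opus-mt-vi-en" is an assumed source checkpoint, not confirmed by this card.
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForSeq2SeqLM

source_id = "Helsinki-NLP/opus-mt-vi-en"  # assumed PyTorch MarianMT checkpoint
model = ORTModelForSeq2SeqLM.from_pretrained(source_id, export=True)  # converts to ONNX on load
tokenizer = AutoTokenizer.from_pretrained(source_id)

# Save the exported ONNX model and tokenizer to a local directory
model.save_pretrained("mt-vi-en-onnx")
tokenizer.save_pretrained("mt-vi-en-onnx")
```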

## Usage example

```python
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForSeq2SeqLM

# Load the tokenizer and the ONNX-exported model from the Hub
tokenizer = AutoTokenizer.from_pretrained("icon-it-tdtu/mt-vi-en-optimum")
model = ORTModelForSeq2SeqLM.from_pretrained("icon-it-tdtu/mt-vi-en-optimum")

# Translate a Vietnamese sentence to English
text = "Tôi là một sinh viên."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
# I am a student.
```
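The same model also handles batched generation. Continuing from the example above, the snippet below is a minimal sketch; the second input sentence and the `max_new_tokens` value are illustrative choices, not taken from this card.

```python
# Hedged sketch: batched translation with the tokenizer/model loaded above.
texts = ["Tôi là một sinh viên.", "Hôm nay trời đẹp."]  # second sentence is illustrative
batch = tokenizer(texts, return_tensors="pt", padding=True)
outputs = model.generate(**batch, max_new_tokens=64)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```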