---
library_name: transformers
tags:
  - seq2seq
license: apache-2.0
datasets:
  - Helsinki-NLP/europarl
  - Helsinki-NLP/opus-100
language:
  - en
  - it
base_model:
  - google/t5-efficient-tiny
pipeline_tag: translation
---

# 🍃 Foglietta - A super tiny model for English -> Italian translation

Foglietta is an encoder-decoder transformer model for English-to-Italian translation based on google/t5-efficient-tiny. It was trained on the en-it portions of Helsinki-NLP/opus-100 and Helsinki-NLP/europarl.

Be advised: As the model is really small, it will make errors.

## Usage

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load model and tokenizer from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("LeonardPuettmann/Foglietta-mt-en-it")
model = AutoModelForSeq2SeqLM.from_pretrained("LeonardPuettmann/Foglietta-mt-en-it")

def generate_response(input_text):
    # "translate English to Italian: " is the task prefix the model expects
    input_ids = tokenizer("translate English to Italian: " + input_text, return_tensors="pt").input_ids
    output = model.generate(input_ids, max_new_tokens=256)
    return tokenizer.decode(output[0], skip_special_tokens=True)

text_to_translate = "I would like a cup of green tea, please."
response = generate_response(text_to_translate)
print(response)
```
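
Because the model is so small, decoding settings can noticeably affect output quality. The sketch below is an optional variation (not part of the original example) that swaps greedy decoding for beam search using the standard `generate()` arguments `num_beams`, `no_repeat_ngram_size`, and `early_stopping`; the specific values are assumptions you may want to tune.

```python
def generate_response_beam(input_text):
    input_ids = tokenizer("translate English to Italian: " + input_text, return_tensors="pt").input_ids
    # num_beams, no_repeat_ngram_size, and early_stopping are standard generate() options;
    # the values here are illustrative, not tuned for this model
    output = model.generate(
        input_ids,
        max_new_tokens=256,
        num_beams=4,
        no_repeat_ngram_size=3,
        early_stopping=True,
    )
    return tokenizer.decode(output[0], skip_special_tokens=True)

print(generate_response_beam("I would like a cup of green tea, please."))
```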

As this model was trained on sentence pairs, it is best to split longer texts into individual sentences, for example with spaCy. You can then translate the sentences one by one and join the translations at the end like this:

```python
# First, install spaCy and its English pipeline if you haven't already:
# !pip install spacy
# !python -m spacy download en_core_web_sm

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import spacy

nlp = spacy.load("en_core_web_sm")

tokenizer = AutoTokenizer.from_pretrained("LeonardPuettmann/Foglietta-mt-en-it")
model = AutoModelForSeq2SeqLM.from_pretrained("LeonardPuettmann/Foglietta-mt-en-it")

def generate_response(input_text):
    input_ids = tokenizer("translate English to Italian: " + input_text, return_tensors="pt").input_ids
    output = model.generate(input_ids, max_new_tokens=256)
    return tokenizer.decode(output[0], skip_special_tokens=True)

text = "How are you doing? Today is a beautiful day. I hope you are doing fine."

# Split the text into sentences with spaCy
doc = nlp(text)
sentences = [sent.text for sent in doc.sents]

# Translate each sentence individually, then join the translations
sentence_translations = []
for sentence in sentences:
    sentence_translations.append(generate_response(sentence))

full_translation = " ".join(sentence_translations)
print(full_translation)
```
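
If you need to translate many sentences, batching them through the tokenizer and a single `generate()` call is usually faster than looping one sentence at a time. This is a minimal sketch, not part of the original card; it reuses the `tokenizer`, `model`, and `sentences` objects from the example above and relies only on standard `transformers` batch features (`padding=True`, `batch_decode`).

```python
# Batch all sentences into one padded tensor and translate them in a single generate() call
inputs = tokenizer(
    ["translate English to Italian: " + s for s in sentences],
    return_tensors="pt",
    padding=True,
)
outputs = model.generate(**inputs, max_new_tokens=256)
batch_translations = tokenizer.batch_decode(outputs, skip_special_tokens=True)

print(" ".join(batch_translations))
```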