---
license: apache-2.0
language:
  - es
  - nah
tags:
  - translation
widget:
  - text: 'translate Spanish to Nahuatl: Mi hermano es un ajolote'
---

t5-small-spanish-nahuatl

Nahuatl is the most widely spoken indigenous language in Mexico. However, training a neural network for neural machine translation is hard due to the lack of structured data. The most popular datasets, such as the Axolotl dataset and the bible-corpus, only consist of ~16,000 and ~7,000 samples respectively. Moreover, there are multiple variants of Nahuatl, which makes this task even more difficult. For example, a single word from the Axolotl dataset can be found written in more than three different ways. Therefore, in this work we leverage the T5 text-to-text prefix training strategy to compensate for the lack of data. We first teach the multilingual model Spanish using English, and then we make the transition to Spanish-Nahuatl. The resulting model successfully translates short sentences from Spanish to Nahuatl. We report ChrF and BLEU results.

Model description

This model is a T5 Transformer (t5-small) fine-tuned on Spanish and Nahuatl sentences collected from the web. The dataset is normalized using the 'sep' normalization from py-elotl.
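For reference, here is a minimal sketch of the 'sep' normalization, assuming py-elotl's elotl.nahuatl.orthography.Normalizer interface (the example sentence is illustrative):

# pip install elotl
import elotl.nahuatl.orthography

# 'sep' is one of the orthographic normalization schemes provided by py-elotl
normalizer = elotl.nahuatl.orthography.Normalizer('sep')
print(normalizer.normalize('miak xochitl istak'))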

Usage

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained('hackathon-pln-es/t5-small-spanish-nahuatl')
tokenizer = AutoTokenizer.from_pretrained('hackathon-pln-es/t5-small-spanish-nahuatl')

model.eval()
sentence = 'muchas flores son blancas'
# Prepend the task prefix used during training
input_ids = tokenizer('translate Spanish to Nahuatl: ' + sentence, return_tensors='pt').input_ids
outputs = model.generate(input_ids)
outputs = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
# outputs == 'miak xochitl istak'

Approach

Dataset

Since the Axolotl corpus contains misalignments, we select only the best-aligned samples (12,207 samples). We also use the bible-corpus (7,821 samples).

Axolotl best-aligned books:
  • Anales de Tlatelolco
  • Diario
  • Documentos nauas de la Ciudad de México del siglo XVI
  • Historia de México narrada en náhuatl y español
  • La tinta negra y roja (antología de poesía náhuatl)
  • Memorial Breve (Libro las ocho relaciones)
  • Método auto-didáctico náhuatl-español
  • Nican Mopohua
  • Quinta Relación (Libro las ocho relaciones)
  • Recetario Nahua de Milpa Alta D.F.
  • Testimonios de la antigua palabra
  • Trece Poetas del Mundo Azteca
  • Una tortillita nomás - Se taxkaltsin saj
  • Vida económica de Tenochtitlan

Also, to increase the amount of data, we collected 3,000 extra samples from the web.
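For reference, the Axolotl corpus is distributed with py-elotl; below is a minimal sketch of loading it and keeping samples from selected books (the row layout and the book filter are assumptions for illustration, not the exact selection procedure used here):

# pip install elotl
import elotl.corpus

axolotl = elotl.corpus.load('axolotl')   # parallel Spanish-Nahuatl rows

# Assumed row layout: [spanish, nahuatl, variant, document_name]
selected_books = {'Nican Mopohua', 'Anales de Tlatelolco'}   # illustrative subset of the list above
pairs = [(row[0], row[1]) for row in axolotl if row[-1] in selected_books]
print(len(pairs))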

Model and training

We employ two training stages using a multilingual T5-small. This model was chosen because it can handle different vocabularies and task prefixes. T5-small is pretrained on different tasks and languages (French, Romanian, English, German).

Training-stage 1 (learning Spanish)

In training stage 1, we first introduce Spanish to the model. The goal is to learn a new language that is rich in data (Spanish) without losing the knowledge previously acquired. We use the English-Spanish Anki dataset, which consists of 118,964 text pairs. We train the model until convergence, adding the prefix "Translate Spanish to English: ".
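The training script is not part of this card; the following is a minimal sketch of a single stage-1 training step with transformers, using an illustrative Anki-style pair (the optimizer and preprocessing are assumptions):

import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('t5-small')
model = AutoModelForSeq2SeqLM.from_pretrained('t5-small')
model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# One illustrative English-Spanish pair from an Anki-style dataset
source = 'muchas flores son blancas'        # Spanish input
target = 'many flowers are white'           # English target
prefix = 'Translate Spanish to English: '   # task prefix from this card

inputs = tokenizer(prefix + source, return_tensors='pt')
labels = tokenizer(target, return_tensors='pt').input_ids

# One optimization step; the real run iterates over all 118,964 pairs until convergence
loss = model(**inputs, labels=labels).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()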

Training-stage 2 (learning Nahuatl)

We use the pretrained Spanish-English model to learn Spanish-Nahuatl. Since the number of Nahuatl pairs is limited, we also add 20,000 samples from the English-Spanish Anki dataset to our dataset. This two-task training avoids overfitting and makes the model more robust.
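A sketch of the mixing step (the placeholder lists below are illustrative; in practice they hold the full prefixed datasets):

import random

# Placeholder pairs; the real lists hold the prefixed Spanish-Nahuatl and English-Spanish data
spanish_nahuatl_pairs = [('translate Spanish to Nahuatl: muchas flores son blancas', 'miak xochitl istak')]
anki_pairs = [('Translate Spanish to English: muchas flores son blancas', 'many flowers are white')] * 118964

# Keep every Nahuatl pair and add 20,000 English-Spanish pairs so both tasks are trained jointly
random.seed(0)
stage2_data = spanish_nahuatl_pairs + random.sample(anki_pairs, 20000)
random.shuffle(stage2_data)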

Training setup

We train the models on the same datasets for 660k steps with a batch size of 16 and a learning rate of 2e-5.
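If the stages are run with the transformers Trainer, this setup corresponds roughly to the following Seq2SeqTrainingArguments (an assumption for illustration; the original training script may differ):

from transformers import Seq2SeqTrainingArguments

# Hyperparameters from this card; output_dir and all other arguments are illustrative defaults
training_args = Seq2SeqTrainingArguments(
    output_dir='t5-small-spanish-nahuatl',
    per_device_train_batch_size=16,
    learning_rate=2e-5,
    max_steps=660_000,
)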

Evaluation results

For a fair comparison, the models are evaluated on the same 505 validation Nahuatl sentences. We report the results using the Hugging Face chrf and sacrebleu metrics:

| English-Spanish pretraining | Validation loss | BLEU | ChrF  |
|-----------------------------|-----------------|------|-------|
| False                       | 1.34            | 6.17 | 26.96 |
| True                        | 1.31            | 6.18 | 28.21 |

The English-Spanish pretraining improves BLEU and ChrF, and leads to faster convergence. You can reproduce the evaluation in the eval.ipynb notebook.
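A minimal sketch of computing both metrics with the Hugging Face evaluate library (the predictions and references below are illustrative placeholders):

# pip install evaluate sacrebleu
import evaluate

chrf = evaluate.load('chrf')
sacrebleu = evaluate.load('sacrebleu')

predictions = ['miak xochitl istak']      # decoded model outputs
references = [['miak xochitl istak']]     # one list of reference translations per prediction

print(chrf.compute(predictions=predictions, references=references))
print(sacrebleu.compute(predictions=predictions, references=references))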

References

  • Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer.

  • Ximena Gutierrez-Vasques, Gerardo Sierra, and Isaac Hernandez. 2016. Axolotl: a Web Accessible Parallel Corpus for Spanish-Nahuatl. In International Conference on Language Resources and Evaluation (LREC).

Team members