Update README.md
README.md
@@ -11,7 +11,7 @@ widget:
---

# t5-small-spanish-nahuatl
-Nahuatl is the most widely spoken indigenous language in Mexico. However, training a neural network for the neural machine translation task is
+Nahuatl is the most widely spoken indigenous language in Mexico. However, training a neural network for the neural machine translation task is challenging due to the lack of structured data. The most popular datasets, such as the Axolot and bible-corpus, only consist of ~16,000 and ~7,000 samples, respectively. Moreover, there are multiple variants of Nahuatl, which makes this task even more difficult. For example, it is possible to find a single word from the Axolot dataset written in more than three different ways. Therefore, we leverage the T5 text-to-text prefix training strategy in this work to compensate for the lack of data. We first teach the multilingual model Spanish using English, then transition to Spanish-Nahuatl. The resulting model successfully translates short sentences from Spanish to Nahuatl. Finally, we report Chrf and BLEU results.


## Model description
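The added paragraph says the checkpoint translates short Spanish sentences to Nahuatl using the T5 text-to-text prefix strategy. As a rough illustration of how such a T5 checkpoint is usually queried, here is a minimal inference sketch; the Hub repo path `t5-small-spanish-nahuatl` and the exact task prefix `translate Spanish to Nahuatl: ` are assumptions inferred from the model name and the prefix strategy mentioned above, not something this commit specifies.

```python
# Minimal inference sketch. Assumptions (not confirmed by this commit):
# - the checkpoint is published on the Hugging Face Hub as "t5-small-spanish-nahuatl"
# - it was fine-tuned with the "translate Spanish to Nahuatl: " task prefix
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "t5-small-spanish-nahuatl"  # hypothetical hub path; adjust to the real repo
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Prepend the assumed task prefix to a short Spanish sentence and generate the translation.
sentence = "translate Spanish to Nahuatl: Quiero aprender náhuatl."
inputs = tokenizer(sentence, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```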