small correction dataset card
Files changed:
- README.md (+1 -1)
- tatoeba_mt.py (+2 -2)
README.md
CHANGED
@@ -159,7 +159,7 @@ The translation task is described in detail in the [Tatoeba-Challenge repository
 
 ### Languages
 
-The data set covers hundreds of languages and language pairs and
+The data set covers hundreds of languages and language pairs and are organized by ISO-639-3 languages. The current release covers the following languages: Afrikaans, Arabic, Azerbaijani, Belarusian, Bulgarian, Bengali, Breton, Bosnian, Catalan, Chamorro, Czech, Chuvash, Welsh, Danish, German, Modern Greek, English, Esperanto, Spanish, Estonian, Basque, Persian, Finnish, Faroese, French, Western Frisian, Irish, Scottish Gaelic, Galician, Guarani, Hebrew, Hindi, Croatian, Hungarian, Armenian, Interlingua, Indonesian, Interlingue, Ido, Icelandic, Italian, Japanese, Javanese, Georgian, Kazakh, Khmer, Korean, Kurdish, Cornish, Latin, Luxembourgish, Lithuanian, Latvian, Maori, Macedonian, Malayalam, Mongolian, Marathi, Malay, Maltese, Burmese, Norwegian Bokmål, Dutch, Norwegian Nynorsk, Norwegian, Occitan, Polish, Portuguese, Quechua, Rundi, Romanian, Russian, Serbo-Croatian, Slovenian, Albanian, Serbian, Swedish, Swahili, Tamil, Telugu, Thai, Turkmen, Tagalog, Turkish, Tatar, Uighur, Ukrainian, Urdu, Uzbek, Vietnamese, Volapük, Yiddish, Chinese
 
 
 ## Dataset Structure
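For context on what the updated card describes, language pairs can be loaded through the `datasets` library by their ISO-639-3 pair configuration. Below is a minimal sketch, assuming the dataset id `Helsinki-NLP/tatoeba_mt` and the pair name `eng-deu`; both identifiers are illustrative guesses, not taken from this diff.

```python
from datasets import load_dataset

# Minimal sketch: load one language pair of the Tatoeba MT data.
# "Helsinki-NLP/tatoeba_mt" and the pair config "eng-deu" are assumed names
# for illustration; check the dataset card for the exact identifiers.
pair = load_dataset("Helsinki-NLP/tatoeba_mt", "eng-deu")

# The loading script (tatoeba_mt.py, diffed below) defines TEST and
# VALIDATION split generators.
print(pair)
print(pair["test"][0])
```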
tatoeba_mt.py
CHANGED
@@ -121,7 +121,7 @@ class tatoebaMT(datasets.GeneratorBasedBuilder):
         test = datasets.SplitGenerator(
             name=datasets.Split.TEST,
             gen_kwargs={
-                "filepath": data_dir["test"]
+                "filepath": data_dir["test"]
             }
         )
         output.append(test)
@@ -130,7 +130,7 @@ class tatoebaMT(datasets.GeneratorBasedBuilder):
         valid = datasets.SplitGenerator(
             name=datasets.Split.VALIDATION,
             gen_kwargs={
-                "filepath": data_dir["validation"]
+                "filepath": data_dir["validation"]
             }
         )
         output.append(valid)
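For readers who have not worked with `datasets` loading scripts, the hunks above sit inside the builder's `_split_generators` method. The following is a minimal sketch of that pattern under stated assumptions: the download URLs, feature names, and TSV layout are placeholders, and it is not the actual tatoeba_mt.py.

```python
import datasets


class TatoebaMTSketch(datasets.GeneratorBasedBuilder):
    """Minimal sketch of the split-generator pattern seen in the diff (not the real script)."""

    def _info(self):
        # Illustrative features: one source/target sentence pair per example.
        return datasets.DatasetInfo(
            features=datasets.Features({
                "source": datasets.Value("string"),
                "target": datasets.Value("string"),
            })
        )

    def _split_generators(self, dl_manager):
        # data_dir maps split names to downloaded file paths; the URLs below are
        # placeholders standing in for the real Tatoeba-Challenge release files.
        data_dir = dl_manager.download_and_extract({
            "test": "https://example.org/eng-deu.test.tsv",       # placeholder URL
            "validation": "https://example.org/eng-deu.dev.tsv",  # placeholder URL
        })

        output = []

        # One SplitGenerator per available split, each passing its file path
        # to _generate_examples via gen_kwargs, the same pattern the diff touches.
        test = datasets.SplitGenerator(
            name=datasets.Split.TEST,
            gen_kwargs={"filepath": data_dir["test"]},
        )
        output.append(test)

        valid = datasets.SplitGenerator(
            name=datasets.Split.VALIDATION,
            gen_kwargs={"filepath": data_dir["validation"]},
        )
        output.append(valid)

        return output

    def _generate_examples(self, filepath):
        # Assumes a tab-separated file with source and target text in the
        # first two columns; the real release files may differ.
        with open(filepath, encoding="utf-8") as f:
            for idx, line in enumerate(f):
                src, tgt = line.rstrip("\n").split("\t")[:2]
                yield idx, {"source": src, "target": tgt}
```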