Enrich model cards with statistics and tts use
#10 by ylacombe (HF staff) - opened

README.md CHANGED
````diff
@@ -24,6 +24,8 @@ source_datasets:
 - original
 task_categories:
 - automatic-speech-recognition
+- text-to-speech
+- text-to-audio
 ---

 # Dataset Card for MultiLingual LibriSpeech
@@ -66,11 +68,12 @@ This is a streamable version of the Multilingual LibriSpeech (MLS) dataset.
 The data archives were restructured from the original ones from [OpenSLR](http://www.openslr.org/94) to make it easier to stream.

 MLS dataset is a large multilingual corpus suitable for speech research. The dataset is derived from read audiobooks from LibriVox and consists of
-8 languages - English, German, Dutch, Spanish, French, Italian, Portuguese, Polish.
+8 languages - English, German, Dutch, Spanish, French, Italian, Portuguese, Polish. It includes about 44.5K hours of English and a total of about 6K hours for other languages.

 ### Supported Tasks and Leaderboards

 - `automatic-speech-recognition`, `speaker-identification`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active leaderboard which can be found at https://paperswithcode.com/dataset/multilingual-librispeech and ranks models based on their WER.
+- `text-to-speech`, `text-to-audio`: The dataset can also be used to train a model for Text-To-Speech (TTS).

 ### Languages

@@ -160,7 +163,7 @@ A typical data point comprises the path to the audio file, usually called `file`

 ### Data Splits

-
+| Number of samples | Train | Train.9h | Train.1h | Dev | Test |
 | ----- | ------ | ----- | ---- | ---- | ---- |
 | german | 469942 | 2194 | 241 | 3469 | 3394 |
 | dutch | 374287 | 2153 | 234 | 3095 | 3075 |
@@ -170,8 +173,6 @@ A typical data point comprises the path to the audio file, usually called `file`
 | portuguese | 37533 | 2116 | 236 | 826 | 871 |
 | polish | 25043 | 2173 | 238 | 512 | 520 |

-
-
 ## Dataset Creation

 ### Curation Rationale
@@ -238,7 +239,48 @@ Public Domain, Creative Commons Attribution 4.0 International Public License ([C
 }
 ```

+
+### Data Statistics
+
+| Duration (h) | Train | Dev | Test |
+|--------------|-----------|-------|-------|
+| English | 44,659.74 | 15.75 | 15.55 |
+| German | 1,966.51 | 14.28 | 14.29 |
+| Dutch | 1,554.24 | 12.76 | 12.76 |
+| French | 1,076.58 | 10.07 | 10.07 |
+| Spanish | 917.68 | 9.99 | 10 |
+| Italian | 247.38 | 5.18 | 5.27 |
+| Portuguese | 160.96 | 3.64 | 3.74 |
+| Polish | 103.65 | 2.08 | 2.14 |
+
+| # Speakers | Train | | Dev | | Test | |
+|------------|-------|------|-----|----|------|----|
+| Gender | M | F | M | F | M | F |
+| English | 2742 | 2748 | 21 | 21 | 21 | 21 |
+| German | 81 | 95 | 15 | 15 | 15 | 15 |
+| Dutch | 9 | 31 | 3 | 3 | 3 | 3 |
+| French | 62 | 80 | 9 | 9 | 9 | 9 |
+| Spanish | 36 | 50 | 10 | 10 | 10 | 10 |
+| Italian | 22 | 43 | 5 | 5 | 5 | 5 |
+| Portuguese | 26 | 16 | 5 | 5 | 5 | 5 |
+| Polish | 6 | 5 | 2 | 2 | 2 | 2 |
+
+| # Hours / Gender | Dev | | Test | |
+|------------------|------|------|------|------|
+| Gender | M | F | M | F |
+| English | 7.76 | 7.99 | 7.62 | 7.93 |
+| German | 7.06 | 7.22 | 7 | 7.29 |
+| Dutch | 6.44 | 6.32 | 6.72 | 6.04 |
+| French | 5.13 | 4.94 | 5.04 | 5.02 |
+| Spanish | 4.91 | 5.08 | 4.78 | 5.23 |
+| Italian | 2.5 | 2.68 | 2.38 | 2.9 |
+| Portuguese | 1.84 | 1.81 | 1.83 | 1.9 |
+| Polish | 1.12 | 0.95 | 1.09 | 1.05 |
+
+
+
+
 ### Contributions

 Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten)
-and [@polinaeterna](https://github.com/polinaeterna) for adding this dataset.
+and [@polinaeterna](https://github.com/polinaeterna) for adding this dataset.
````
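As a usage note for the streaming claim in the card above, here is a minimal sketch of loading one language config lazily with 🤗 Datasets. The repository id (`facebook/multilingual_librispeech`), the config name (`german`), and the column names (`audio`, `text`) are assumptions for illustration and may differ from the actual repo.

```python
# Minimal sketch: stream one language config without downloading the full archives.
# Assumed names: repo id "facebook/multilingual_librispeech", config "german",
# columns "audio" and "text"; adjust to the dataset's actual identifiers.
from datasets import load_dataset

mls = load_dataset(
    "facebook/multilingual_librispeech",  # assumed Hub repo id
    "german",                             # assumed language config
    split="train",
    streaming=True,                       # iterate lazily over the restructured archives
)

example = next(iter(mls))
print(example["text"])                    # transcription (assumed column name)
print(example["audio"]["sampling_rate"])  # decoded audio (assumed column name)
```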