Languages: English
Tags: query-by-example-spoken-term-detection, audio-slot-filling, speaker-diarization, automatic-speaker-verification
Update files from the datasets library (from 1.13.0)
Release notes: https://github.com/huggingface/datasets/releases/tag/1.13.0
- README.md +76 -17
- dataset_infos.json +1 -1
- superb.py +207 -7
README.md
CHANGED
@@ -87,6 +87,37 @@ Automatic Speech Recognition (ASR) transcribes utterances into words. While PR a
 
 Keyword Spotting (KS) detects preregistered keywords by classifying utterances into a predefined set of words. The task is usually performed on-device for fast response time. Thus, accuracy, model size, and inference time are all crucial. SUPERB uses the widely used [Speech Commands dataset v1.0](https://www.tensorflow.org/datasets/catalog/speech_commands) for the task. The dataset consists of ten classes of keywords, a class for silence, and an unknown class to include false positives. The evaluation metric is accuracy (ACC).
 
+##### Example of usage:
+
+Use these auxiliary functions to:
+- load the audio file into an audio data array
+- sample from long `_silence_` audio clips
+
+For other examples of handling long `_silence_` clips see the [S3PRL](https://github.com/s3prl/s3prl/blob/099ce807a6ffa6bf2482ceecfcaf83dea23da355/s3prl/downstream/speech_commands/dataset.py#L80)
+or [TFDS](https://github.com/tensorflow/datasets/blob/6b8cfdb7c3c0a04e731caaa8660ce948d0a67b1e/tensorflow_datasets/audio/speech_commands.py#L143) implementations.
+
+```python
+def map_to_array(example):
+    import soundfile as sf
+
+    speech_array, sample_rate = sf.read(example["file"])
+    example["speech"] = speech_array
+    example["sample_rate"] = sample_rate
+    return example
+
+
+def sample_noise(example):
+    # Use this function to extract random 1 sec slices of each _silence_ utterance,
+    # e.g. inside `torch.utils.data.Dataset.__getitem__()`
+    from random import randint
+
+    if example["label"] == "_silence_":
+        random_offset = randint(0, len(example["speech"]) - example["sample_rate"] - 1)
+        example["speech"] = example["speech"][random_offset : random_offset + example["sample_rate"]]
+
+    return example
+```
+
 #### qbe
 
 Query by Example Spoken Term Detection (QbE) detects a spoken term (query) in an audio database (documents) by binary discriminating a given pair of query and document into a match or not. The English subset in [QUESST 2014 challenge](https://github.com/s3prl/s3prl/tree/master/downstream#qbe-query-by-example-spoken-term-detection) is adopted since we focus on investigating English as the first step. The evaluation metric is maximum term weighted value (MTWV) which balances misses and false alarms.
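The helper functions in the hunk above are designed to be chained through `datasets.map`. A minimal sketch of the intended wiring (assuming the `datasets` and `soundfile` packages, and `map_to_array` defined as above):

```python
from datasets import load_dataset

ks = load_dataset("superb", "ks", split="validation")
ks = ks.map(map_to_array)  # adds a "speech" float array and "sample_rate" to every example
```

`sample_noise`, by contrast, is meant to run once per access (e.g. in `torch.utils.data.Dataset.__getitem__()`), so that every epoch draws a different 1-second slice from the minute-long `_silence_` recordings.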
@@ -189,7 +220,7 @@ An example from each split looks like:
 ```python
 {
     'file': '/path/yes/af7a8296_nohash_1.wav',
-    'label': 'yes'
+    'label': 0 # 'yes'
 }
 ```
 
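The changed line reflects that `label` is now stored as a `ClassLabel` integer rather than a raw string; the name mapping is recoverable from the dataset features (a sketch, reusing `ks` from the note above):

```python
label_feature = ks.features["label"]
label_feature.int2str(0)      # -> 'yes'
label_feature.str2int("yes")  # -> 0
```

Note that `sample_noise` above compares against the string name `"_silence_"`, so decode the integer label first when applying it to rows loaded from this config.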
@@ -200,8 +231,16 @@ An example from each split looks like:
 
 #### ic
 
-
-
+```python
+{
+    'file': "/path/wavs/speakers/2BqVo8kVB2Skwgyb/063aa8f0-4479-11e9-a9a5-5dbec3b8816a.wav",
+    'speaker_id': '2BqVo8kVB2Skwgyb',
+    'text': 'Turn the bedroom lights off',
+    'action': 3, # 'deactivate'
+    'object': 7, # 'lights'
+    'location': 0 # 'bedroom'
+}
+```
 
 #### sf
 
@@ -210,8 +249,12 @@ An example from each split looks like:
 
 #### si
 
-
-
+```python
+{
+    'file': '/path/wav/id10003/na8-QEFmj44/00003.wav',
+    'label': 2 # 'id10003'
+}
+```
 
 #### asv
 
@@ -260,17 +303,24 @@ An example from each split looks like:
 #### ks
 
 - `file` (`string`): Path to the WAV audio file.
-- `label` (`
+- `label` (`ClassLabel`): Label of the spoken command. Possible values:
+  - `0: "yes", 1: "no", 2: "up", 3: "down", 4: "left", 5: "right", 6: "on", 7: "off", 8: "stop", 9: "go", 10: "_silence_", 11: "_unknown_"`
 
 #### qbe
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
-
 #### ic
 
-
-
+- `file` (`string`): Path to the WAV audio file.
+- `speaker_id` (`string`): ID of the speaker.
+- `text` (`string`): Transcription of the spoken command.
+- `action` (`ClassLabel`): Label of the command's action. Possible values:
+  - `0: "activate", 1: "bring", 2: "change language", 3: "deactivate", 4: "decrease", 5: "increase"`
+- `object` (`ClassLabel`): Label of the command's object. Possible values:
+  - `0: "Chinese", 1: "English", 2: "German", 3: "Korean", 4: "heat", 5: "juice", 6: "lamp", 7: "lights", 8: "music", 9: "newspaper", 10: "none", 11: "shoes", 12: "socks", 13: "volume"`
+- `location` (`ClassLabel`): Label of the command's location. Possible values:
+  - `0: "bedroom", 1: "kitchen", 2: "none", 3: "washroom"`
 
 #### sf
 
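The three `ic` classification fields follow the same `ClassLabel` pattern, so the integers in the earlier `ic` example decode as follows (a sketch, assuming the `ic` config loads as shown):

```python
from datasets import load_dataset

ic = load_dataset("superb", "ic", split="train")

ic.features["action"].int2str(3)    # -> 'deactivate'
ic.features["object"].int2str(7)    # -> 'lights'
ic.features["location"].int2str(0)  # -> 'bedroom'
```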
@@ -279,8 +329,9 @@ An example from each split looks like:
 
 #### si
 
-
-
+- `file` (`string`): Path to the WAV audio file.
+- `label` (`ClassLabel`): Label (ID) of the speaker. Possible values:
+  - `0: "id10001", 1: "id10002", 2: "id10003", ..., 1250: "id11251"`
 
 #### asv
 
@@ -301,7 +352,9 @@ The data fields in all splits are:
 
 #### er
 
-
+- `file` (`string`): Path to the WAV audio file.
+- `label` (`ClassLabel`): Label of the speech emotion. Possible values:
+  - `0: "neu", 1: "hap", 2: "ang", 3: "sad"`
 
 ### Data Splits
 
@@ -329,8 +382,9 @@ The data fields in all splits are:
 
 #### ic
 
-
-
+|    | train | validation | test |
+|----|------:|-----------:|-----:|
+| ic | 23132 |       3118 | 3793 |
 
 #### sf
 
@@ -339,8 +393,9 @@ The data fields in all splits are:
 
 #### si
 
-
-
+|    | train  | validation | test |
+|----|-------:|-----------:|-----:|
+| si | 138361 |       6904 | 8251 |
 
 #### asv
 
@@ -357,7 +412,11 @@ The data is split into "train", "dev" and "test" sets, each containing the follo
 
 #### er
 
-
+The data is split into 5 sets intended for 5-fold cross-validation:
+
+|    | session1 | session2 | session3 | session4 | session5 |
+|----|---------:|---------:|---------:|---------:|---------:|
+| er |     1085 |     1023 |     1151 |     1031 |     1241 |
 
 ## Dataset Creation
 
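Because `er` ships as five session splits rather than train/validation/test, 5-fold cross-validation has to be assembled by hand. A sketch, assuming the splits are exposed under the `session1`…`session5` names from the table:

```python
from datasets import load_dataset, concatenate_datasets

sessions = [f"session{i}" for i in range(1, 6)]

# Hold out one session per fold and train on the remaining four.
for held_out in sessions:
    test_set = load_dataset("superb", "er", split=held_out)
    train_set = concatenate_datasets(
        [load_dataset("superb", "er", split=s) for s in sessions if s != held_out]
    )
```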
dataset_infos.json
CHANGED
@@ -1 +1 @@
-
{"asr": {"description": "Self-supervised learning (SSL) has proven vital for advancing research in\nnatural language processing (NLP) and computer vision (CV). The paradigm\npretrains a shared model on large volumes of unlabeled data and achieves\nstate-of-the-art (SOTA) for various tasks with minimal adaptation. However, the\nspeech processing community lacks a similar setup to systematically explore the\nparadigm. To bridge this gap, we introduce Speech processing Universal\nPERformance Benchmark (SUPERB). SUPERB is a leaderboard to benchmark the\nperformance of a shared model across a wide range of speech processing tasks\nwith minimal architecture changes and labeled data. Among multiple usages of the\nshared model, we especially focus on extracting the representation learned from\nSSL due to its preferable re-usability. We present a simple framework to solve\nSUPERB tasks by learning task-specialized lightweight prediction heads on top of\nthe frozen shared model. Our results demonstrate that the framework is promising\nas SSL representations show competitive generalizability and accessibility\nacross SUPERB tasks. We release SUPERB as a challenge with a leaderboard and a\nbenchmark toolkit to fuel the research in representation learning and general\nspeech processing.\n\nNote that in order to limit the required storage for preparing this dataset, the\naudio is stored in the .flac format and is not converted to a float32 array. To\nconvert, the audio file to a float32 array, please make use of the `.map()`\nfunction as follows:\n\n\n```python\nimport soundfile as sf\n\ndef map_to_array(batch):\n speech_array, _ = sf.read(batch[\"file\"])\n batch[\"speech\"] = speech_array\n return batch\n\ndataset = dataset.map(map_to_array, remove_columns=[\"file\"])\n```\n", "citation": "@article{DBLP:journals/corr/abs-2105-01051,\n author = {Shu{-}Wen Yang and\n Po{-}Han Chi and\n Yung{-}Sung Chuang and\n Cheng{-}I Jeff Lai and\n Kushal Lakhotia and\n Yist Y. Lin and\n Andy T. 
Liu and\n Jiatong Shi and\n Xuankai Chang and\n Guan{-}Ting Lin and\n Tzu{-}Hsien Huang and\n Wei{-}Cheng Tseng and\n Ko{-}tik Lee and\n Da{-}Rong Liu and\n Zili Huang and\n Shuyan Dong and\n Shang{-}Wen Li and\n Shinji Watanabe and\n Abdelrahman Mohamed and\n Hung{-}yi Lee},\n title = {{SUPERB:} Speech processing Universal PERformance Benchmark},\n journal = {CoRR},\n volume = {abs/2105.01051},\n year = {2021},\n url = {https://arxiv.org/abs/2105.01051},\n archivePrefix = {arXiv},\n eprint = {2105.01051},\n timestamp = {Thu, 01 Jul 2021 13:30:22 +0200},\n biburl = {https://dblp.org/rec/journals/corr/abs-2105-01051.bib},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}\n", "homepage": "http://www.openslr.org/12", "license": "", "features": {"file": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "speaker_id": {"dtype": "int64", "id": null, "_type": "Value"}, "chapter_id": {"dtype": "int64", "id": null, "_type": "Value"}, "id": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": {"input": "file", "output": "text"}, "task_templates": [{"task": "automatic-speech-recognition", "audio_file_path_column": "file", "transcription_column": "text"}], "builder_name": "superb", "config_name": "asr", "version": {"version_str": "1.9.0", "description": "", "major": 1, "minor": 9, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 11852430, "num_examples": 28539, "dataset_name": "superb"}, "validation": {"name": "validation", "num_bytes": 897213, "num_examples": 2703, "dataset_name": "superb"}, "test": {"name": "test", "num_bytes": 871234, "num_examples": 2620, "dataset_name": "superb"}}, "download_checksums": {"http://www.openslr.org/resources/12/dev-clean.tar.gz": {"num_bytes": 337926286, "checksum": "76f87d090650617fca0cac8f88b9416e0ebf80350acb97b343a85fa903728ab3"}, "http://www.openslr.org/resources/12/test-clean.tar.gz": {"num_bytes": 346663984, "checksum": "39fde525e59672dc6d1551919b1478f724438a95aa55f874b576be21967e6c23"}, "http://www.openslr.org/resources/12/train-clean-100.tar.gz": {"num_bytes": 6387309499, "checksum": "d4ddd1d5a6ab303066f14971d768ee43278a5f2a0aa43dc716b0e64ecbbbf6e2"}}, "download_size": 7071899769, "post_processing_size": null, "dataset_size": 13620877, "size_in_bytes": 7085520646}, "sd": {"description": "Self-supervised learning (SSL) has proven vital for advancing research in\nnatural language processing (NLP) and computer vision (CV). The paradigm\npretrains a shared model on large volumes of unlabeled data and achieves\nstate-of-the-art (SOTA) for various tasks with minimal adaptation. However, the\nspeech processing community lacks a similar setup to systematically explore the\nparadigm. To bridge this gap, we introduce Speech processing Universal\nPERformance Benchmark (SUPERB). SUPERB is a leaderboard to benchmark the\nperformance of a shared model across a wide range of speech processing tasks\nwith minimal architecture changes and labeled data. Among multiple usages of the\nshared model, we especially focus on extracting the representation learned from\nSSL due to its preferable re-usability. We present a simple framework to solve\nSUPERB tasks by learning task-specialized lightweight prediction heads on top of\nthe frozen shared model. Our results demonstrate that the framework is promising\nas SSL representations show competitive generalizability and accessibility\nacross SUPERB tasks. 
We release SUPERB as a challenge with a leaderboard and a\nbenchmark toolkit to fuel the research in representation learning and general\nspeech processing.\n\nNote that in order to limit the required storage for preparing this dataset, the\naudio is stored in the .flac format and is not converted to a float32 array. To\nconvert, the audio file to a float32 array, please make use of the `.map()`\nfunction as follows:\n\n\n```python\nimport soundfile as sf\n\ndef map_to_array(batch):\n speech_array, _ = sf.read(batch[\"file\"])\n batch[\"speech\"] = speech_array\n return batch\n\ndataset = dataset.map(map_to_array, remove_columns=[\"file\"])\n```\n", "citation": "@article{DBLP:journals/corr/abs-2105-01051,\n author = {Shu{-}Wen Yang and\n Po{-}Han Chi and\n Yung{-}Sung Chuang and\n Cheng{-}I Jeff Lai and\n Kushal Lakhotia and\n Yist Y. Lin and\n Andy T. Liu and\n Jiatong Shi and\n Xuankai Chang and\n Guan{-}Ting Lin and\n Tzu{-}Hsien Huang and\n Wei{-}Cheng Tseng and\n Ko{-}tik Lee and\n Da{-}Rong Liu and\n Zili Huang and\n Shuyan Dong and\n Shang{-}Wen Li and\n Shinji Watanabe and\n Abdelrahman Mohamed and\n Hung{-}yi Lee},\n title = {{SUPERB:} Speech processing Universal PERformance Benchmark},\n journal = {CoRR},\n volume = {abs/2105.01051},\n year = {2021},\n url = {https://arxiv.org/abs/2105.01051},\n archivePrefix = {arXiv},\n eprint = {2105.01051},\n timestamp = {Thu, 01 Jul 2021 13:30:22 +0200},\n biburl = {https://dblp.org/rec/journals/corr/abs-2105-01051.bib},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}\n", "homepage": "https://github.com/ftshijt/LibriMix", "license": "", "features": {"record_id": {"dtype": "string", "id": null, "_type": "Value"}, "file": {"dtype": "string", "id": null, "_type": "Value"}, "start": {"dtype": "int64", "id": null, "_type": "Value"}, "end": {"dtype": "int64", "id": null, "_type": "Value"}, "speakers": [{"speaker_id": {"dtype": "string", "id": null, "_type": "Value"}, "start": {"dtype": "int64", "id": null, "_type": "Value"}, "end": {"dtype": "int64", "id": null, "_type": "Value"}}]}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "superb", "config_name": "sd", "version": {"version_str": "1.9.0", "description": "", "major": 1, "minor": 9, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 4622013, "num_examples": 13901, "dataset_name": "superb"}, "dev": {"name": "dev", "num_bytes": 860472, "num_examples": 3014, "dataset_name": "superb"}, "test": {"name": "test", "num_bytes": 847803, "num_examples": 3002, "dataset_name": "superb"}}, "download_checksums": {"https://huggingface.co/datasets/superb/superb-data/resolve/main/sd/train/reco2dur": {"num_bytes": 540906, "checksum": "879dca4b1108c93bd86df879463fca15a4de42a0f95a7e6987138dc6029b5554"}, "https://huggingface.co/datasets/superb/superb-data/resolve/main/sd/train/segments": {"num_bytes": 5723993, "checksum": "f19cb0ecc342f8d2cd855118879a111822d7cf55fcd078ef156f5147233a8e11"}, "https://huggingface.co/datasets/superb/superb-data/resolve/main/sd/train/utt2spk": {"num_bytes": 3165995, "checksum": "a4295726caf05d72f5ad24706180b9dbccffe6c0c2fc0128ca4b02b7b828a28a"}, "https://huggingface.co/datasets/superb/superb-data/resolve/main/sd/train/wav.zip": {"num_bytes": 5706733518, "checksum": "4231070427ffbc9b3bddae874dba32f3985a0db0b0feb4dfa29ed4d1d11bf41b"}, "https://huggingface.co/datasets/superb/superb-data/resolve/main/sd/dev/reco2dur": {"num_bytes": 115918, "checksum": 
"a30fd59ad01db0315a82cad7a64baea009e6c2bcdfb6b2501bc8873ede72de06"}, "https://huggingface.co/datasets/superb/superb-data/resolve/main/sd/dev/segments": {"num_bytes": 673006, "checksum": "2b977917e7ab9feec03afb4fd6a4662df90e48dbcc42977a4b9c89c8d40432ee"}, "https://huggingface.co/datasets/superb/superb-data/resolve/main/sd/dev/utt2spk": {"num_bytes": 374794, "checksum": "9f47a7bed76e7a03e57d66ba9cc5f57d85d91f748d0b1eb20301d09e6c24cd20"}, "https://huggingface.co/datasets/superb/superb-data/resolve/main/sd/dev/wav.zip": {"num_bytes": 765594100, "checksum": "e28b3422ce59e2a5273be924e6ed6b8f115c0983db1997e56441973c27ee1cd8"}, "https://huggingface.co/datasets/superb/superb-data/resolve/main/sd/test/reco2dur": {"num_bytes": 113357, "checksum": "6e013d917015031e2f1383871b52dfc1122e7b16cdee53bd8e5e0a7fbc57e406"}, "https://huggingface.co/datasets/superb/superb-data/resolve/main/sd/test/segments": {"num_bytes": 650742, "checksum": "92f8de0f56c55a34e9111542c24ea13f2d2efaf9ebe64af31250cadab020f987"}, "https://huggingface.co/datasets/superb/superb-data/resolve/main/sd/test/utt2spk": {"num_bytes": 361548, "checksum": "19dcb558aa886f0d553d8d9b8735ea1998b83e96d5245e5511cb732c84625ffd"}, "https://huggingface.co/datasets/superb/superb-data/resolve/main/sd/test/wav.zip": {"num_bytes": 706322334, "checksum": "9c8ee97d3068759c0101bf88684abab77183374dbb3bb40f7c0b25d385992ea6"}}, "download_size": 7190370211, "post_processing_size": null, "dataset_size": 6330288, "size_in_bytes": 7196700499}, "ks": {"description": "Self-supervised learning (SSL) has proven vital for advancing research in\nnatural language processing (NLP) and computer vision (CV). The paradigm\npretrains a shared model on large volumes of unlabeled data and achieves\nstate-of-the-art (SOTA) for various tasks with minimal adaptation. However, the\nspeech processing community lacks a similar setup to systematically explore the\nparadigm. To bridge this gap, we introduce Speech processing Universal\nPERformance Benchmark (SUPERB). SUPERB is a leaderboard to benchmark the\nperformance of a shared model across a wide range of speech processing tasks\nwith minimal architecture changes and labeled data. Among multiple usages of the\nshared model, we especially focus on extracting the representation learned from\nSSL due to its preferable re-usability. We present a simple framework to solve\nSUPERB tasks by learning task-specialized lightweight prediction heads on top of\nthe frozen shared model. Our results demonstrate that the framework is promising\nas SSL representations show competitive generalizability and accessibility\nacross SUPERB tasks. We release SUPERB as a challenge with a leaderboard and a\nbenchmark toolkit to fuel the research in representation learning and general\nspeech processing.\n\nNote that in order to limit the required storage for preparing this dataset, the\naudio is stored in the .flac format and is not converted to a float32 array. To\nconvert, the audio file to a float32 array, please make use of the `.map()`\nfunction as follows:\n\n\n```python\nimport soundfile as sf\n\ndef map_to_array(batch):\n speech_array, _ = sf.read(batch[\"file\"])\n batch[\"speech\"] = speech_array\n return batch\n\ndataset = dataset.map(map_to_array, remove_columns=[\"file\"])\n```\n", "citation": "@article{DBLP:journals/corr/abs-2105-01051,\n author = {Shu{-}Wen Yang and\n Po{-}Han Chi and\n Yung{-}Sung Chuang and\n Cheng{-}I Jeff Lai and\n Kushal Lakhotia and\n Yist Y. Lin and\n Andy T. 
Liu and\n Jiatong Shi and\n Xuankai Chang and\n Guan{-}Ting Lin and\n Tzu{-}Hsien Huang and\n Wei{-}Cheng Tseng and\n Ko{-}tik Lee and\n Da{-}Rong Liu and\n Zili Huang and\n Shuyan Dong and\n Shang{-}Wen Li and\n Shinji Watanabe and\n Abdelrahman Mohamed and\n Hung{-}yi Lee},\n title = {{SUPERB:} Speech processing Universal PERformance Benchmark},\n journal = {CoRR},\n volume = {abs/2105.01051},\n year = {2021},\n url = {https://arxiv.org/abs/2105.01051},\n archivePrefix = {arXiv},\n eprint = {2105.01051},\n timestamp = {Thu, 01 Jul 2021 13:30:22 +0200},\n biburl = {https://dblp.org/rec/journals/corr/abs-2105-01051.bib},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}\n", "homepage": "https://www.tensorflow.org/datasets/catalog/speech_commands", "license": "", "features": {"file": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 12, "names": ["yes", "no", "up", "down", "left", "right", "on", "off", "stop", "go", "_silence_", "_unknown_"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": {"input": "file", "output": "label"}, "task_templates": null, "builder_name": "superb", "config_name": "ks", "version": {"version_str": "1.9.0", "description": "", "major": 1, "minor": 9, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 8467781, "num_examples": 51094, "dataset_name": "superb"}, "validation": {"name": "validation", "num_bytes": 1126476, "num_examples": 6798, "dataset_name": "superb"}, "test": {"name": "test", "num_bytes": 510619, "num_examples": 3081, "dataset_name": "superb"}}, "download_checksums": {"http://download.tensorflow.org/data/speech_commands_v0.01.tar.gz": {"num_bytes": 1489096277, "checksum": "743935421bb51cccdb6bdd152e04c5c70274e935c82119ad7faeec31780d811d"}, "http://download.tensorflow.org/data/speech_commands_test_set_v0.01.tar.gz": {"num_bytes": 71271436, "checksum": "baa084f6b62c91de660ff0588ae4dfc4e4d534aa99ac0e5f406cba75836cbd00"}}, "download_size": 1560367713, "post_processing_size": null, "dataset_size": 10104876, "size_in_bytes": 1570472589}}
+
{"asr": {"description": "Self-supervised learning (SSL) has proven vital for advancing research in\nnatural language processing (NLP) and computer vision (CV). The paradigm\npretrains a shared model on large volumes of unlabeled data and achieves\nstate-of-the-art (SOTA) for various tasks with minimal adaptation. However, the\nspeech processing community lacks a similar setup to systematically explore the\nparadigm. To bridge this gap, we introduce Speech processing Universal\nPERformance Benchmark (SUPERB). SUPERB is a leaderboard to benchmark the\nperformance of a shared model across a wide range of speech processing tasks\nwith minimal architecture changes and labeled data. Among multiple usages of the\nshared model, we especially focus on extracting the representation learned from\nSSL due to its preferable re-usability. We present a simple framework to solve\nSUPERB tasks by learning task-specialized lightweight prediction heads on top of\nthe frozen shared model. Our results demonstrate that the framework is promising\nas SSL representations show competitive generalizability and accessibility\nacross SUPERB tasks. We release SUPERB as a challenge with a leaderboard and a\nbenchmark toolkit to fuel the research in representation learning and general\nspeech processing.\n\nNote that in order to limit the required storage for preparing this dataset, the\naudio is stored in the .wav format and is not converted to a float32 array. To\nconvert the audio file to a float32 array, please make use of the `.map()`\nfunction as follows:\n\n\n```python\nimport soundfile as sf\n\ndef map_to_array(batch):\n speech_array, _ = sf.read(batch[\"file\"])\n batch[\"speech\"] = speech_array\n return batch\n\ndataset = dataset.map(map_to_array, remove_columns=[\"file\"])\n```\n", "citation": "@article{DBLP:journals/corr/abs-2105-01051,\n author = {Shu{-}Wen Yang and\n Po{-}Han Chi and\n Yung{-}Sung Chuang and\n Cheng{-}I Jeff Lai and\n Kushal Lakhotia and\n Yist Y. Lin and\n Andy T. 
Liu and\n Jiatong Shi and\n Xuankai Chang and\n Guan{-}Ting Lin and\n Tzu{-}Hsien Huang and\n Wei{-}Cheng Tseng and\n Ko{-}tik Lee and\n Da{-}Rong Liu and\n Zili Huang and\n Shuyan Dong and\n Shang{-}Wen Li and\n Shinji Watanabe and\n Abdelrahman Mohamed and\n Hung{-}yi Lee},\n title = {{SUPERB:} Speech processing Universal PERformance Benchmark},\n journal = {CoRR},\n volume = {abs/2105.01051},\n year = {2021},\n url = {https://arxiv.org/abs/2105.01051},\n archivePrefix = {arXiv},\n eprint = {2105.01051},\n timestamp = {Thu, 01 Jul 2021 13:30:22 +0200},\n biburl = {https://dblp.org/rec/journals/corr/abs-2105-01051.bib},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}\n", "homepage": "http://www.openslr.org/12", "license": "", "features": {"file": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "speaker_id": {"dtype": "int64", "id": null, "_type": "Value"}, "chapter_id": {"dtype": "int64", "id": null, "_type": "Value"}, "id": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": {"input": "file", "output": "text"}, "task_templates": [{"task": "automatic-speech-recognition", "audio_file_path_column": "file", "transcription_column": "text"}], "builder_name": "superb", "config_name": "asr", "version": {"version_str": "1.9.0", "description": "", "major": 1, "minor": 9, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 11852430, "num_examples": 28539, "dataset_name": "superb"}, "validation": {"name": "validation", "num_bytes": 897213, "num_examples": 2703, "dataset_name": "superb"}, "test": {"name": "test", "num_bytes": 871234, "num_examples": 2620, "dataset_name": "superb"}}, "download_checksums": {"http://www.openslr.org/resources/12/dev-clean.tar.gz": {"num_bytes": 337926286, "checksum": "76f87d090650617fca0cac8f88b9416e0ebf80350acb97b343a85fa903728ab3"}, "http://www.openslr.org/resources/12/test-clean.tar.gz": {"num_bytes": 346663984, "checksum": "39fde525e59672dc6d1551919b1478f724438a95aa55f874b576be21967e6c23"}, "http://www.openslr.org/resources/12/train-clean-100.tar.gz": {"num_bytes": 6387309499, "checksum": "d4ddd1d5a6ab303066f14971d768ee43278a5f2a0aa43dc716b0e64ecbbbf6e2"}}, "download_size": 7071899769, "post_processing_size": null, "dataset_size": 13620877, "size_in_bytes": 7085520646}, "sd": {"description": "Self-supervised learning (SSL) has proven vital for advancing research in\nnatural language processing (NLP) and computer vision (CV). The paradigm\npretrains a shared model on large volumes of unlabeled data and achieves\nstate-of-the-art (SOTA) for various tasks with minimal adaptation. However, the\nspeech processing community lacks a similar setup to systematically explore the\nparadigm. To bridge this gap, we introduce Speech processing Universal\nPERformance Benchmark (SUPERB). SUPERB is a leaderboard to benchmark the\nperformance of a shared model across a wide range of speech processing tasks\nwith minimal architecture changes and labeled data. Among multiple usages of the\nshared model, we especially focus on extracting the representation learned from\nSSL due to its preferable re-usability. We present a simple framework to solve\nSUPERB tasks by learning task-specialized lightweight prediction heads on top of\nthe frozen shared model. Our results demonstrate that the framework is promising\nas SSL representations show competitive generalizability and accessibility\nacross SUPERB tasks. 
We release SUPERB as a challenge with a leaderboard and a\nbenchmark toolkit to fuel the research in representation learning and general\nspeech processing.\n\nNote that in order to limit the required storage for preparing this dataset, the\naudio is stored in the .flac format and is not converted to a float32 array. To\nconvert, the audio file to a float32 array, please make use of the `.map()`\nfunction as follows:\n\n\n```python\nimport soundfile as sf\n\ndef map_to_array(batch):\n speech_array, _ = sf.read(batch[\"file\"])\n batch[\"speech\"] = speech_array\n return batch\n\ndataset = dataset.map(map_to_array, remove_columns=[\"file\"])\n```\n", "citation": "@article{DBLP:journals/corr/abs-2105-01051,\n author = {Shu{-}Wen Yang and\n Po{-}Han Chi and\n Yung{-}Sung Chuang and\n Cheng{-}I Jeff Lai and\n Kushal Lakhotia and\n Yist Y. Lin and\n Andy T. Liu and\n Jiatong Shi and\n Xuankai Chang and\n Guan{-}Ting Lin and\n Tzu{-}Hsien Huang and\n Wei{-}Cheng Tseng and\n Ko{-}tik Lee and\n Da{-}Rong Liu and\n Zili Huang and\n Shuyan Dong and\n Shang{-}Wen Li and\n Shinji Watanabe and\n Abdelrahman Mohamed and\n Hung{-}yi Lee},\n title = {{SUPERB:} Speech processing Universal PERformance Benchmark},\n journal = {CoRR},\n volume = {abs/2105.01051},\n year = {2021},\n url = {https://arxiv.org/abs/2105.01051},\n archivePrefix = {arXiv},\n eprint = {2105.01051},\n timestamp = {Thu, 01 Jul 2021 13:30:22 +0200},\n biburl = {https://dblp.org/rec/journals/corr/abs-2105-01051.bib},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}\n", "homepage": "https://github.com/ftshijt/LibriMix", "license": "", "features": {"record_id": {"dtype": "string", "id": null, "_type": "Value"}, "file": {"dtype": "string", "id": null, "_type": "Value"}, "start": {"dtype": "int64", "id": null, "_type": "Value"}, "end": {"dtype": "int64", "id": null, "_type": "Value"}, "speakers": [{"speaker_id": {"dtype": "string", "id": null, "_type": "Value"}, "start": {"dtype": "int64", "id": null, "_type": "Value"}, "end": {"dtype": "int64", "id": null, "_type": "Value"}}]}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "superb", "config_name": "sd", "version": {"version_str": "1.9.0", "description": "", "major": 1, "minor": 9, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 4622013, "num_examples": 13901, "dataset_name": "superb"}, "dev": {"name": "dev", "num_bytes": 860472, "num_examples": 3014, "dataset_name": "superb"}, "test": {"name": "test", "num_bytes": 847803, "num_examples": 3002, "dataset_name": "superb"}}, "download_checksums": {"https://huggingface.co/datasets/superb/superb-data/resolve/main/sd/train/reco2dur": {"num_bytes": 540906, "checksum": "879dca4b1108c93bd86df879463fca15a4de42a0f95a7e6987138dc6029b5554"}, "https://huggingface.co/datasets/superb/superb-data/resolve/main/sd/train/segments": {"num_bytes": 5723993, "checksum": "f19cb0ecc342f8d2cd855118879a111822d7cf55fcd078ef156f5147233a8e11"}, "https://huggingface.co/datasets/superb/superb-data/resolve/main/sd/train/utt2spk": {"num_bytes": 3165995, "checksum": "a4295726caf05d72f5ad24706180b9dbccffe6c0c2fc0128ca4b02b7b828a28a"}, "https://huggingface.co/datasets/superb/superb-data/resolve/main/sd/train/wav.zip": {"num_bytes": 5706733518, "checksum": "4231070427ffbc9b3bddae874dba32f3985a0db0b0feb4dfa29ed4d1d11bf41b"}, "https://huggingface.co/datasets/superb/superb-data/resolve/main/sd/dev/reco2dur": {"num_bytes": 115918, "checksum": 
"a30fd59ad01db0315a82cad7a64baea009e6c2bcdfb6b2501bc8873ede72de06"}, "https://huggingface.co/datasets/superb/superb-data/resolve/main/sd/dev/segments": {"num_bytes": 673006, "checksum": "2b977917e7ab9feec03afb4fd6a4662df90e48dbcc42977a4b9c89c8d40432ee"}, "https://huggingface.co/datasets/superb/superb-data/resolve/main/sd/dev/utt2spk": {"num_bytes": 374794, "checksum": "9f47a7bed76e7a03e57d66ba9cc5f57d85d91f748d0b1eb20301d09e6c24cd20"}, "https://huggingface.co/datasets/superb/superb-data/resolve/main/sd/dev/wav.zip": {"num_bytes": 765594100, "checksum": "e28b3422ce59e2a5273be924e6ed6b8f115c0983db1997e56441973c27ee1cd8"}, "https://huggingface.co/datasets/superb/superb-data/resolve/main/sd/test/reco2dur": {"num_bytes": 113357, "checksum": "6e013d917015031e2f1383871b52dfc1122e7b16cdee53bd8e5e0a7fbc57e406"}, "https://huggingface.co/datasets/superb/superb-data/resolve/main/sd/test/segments": {"num_bytes": 650742, "checksum": "92f8de0f56c55a34e9111542c24ea13f2d2efaf9ebe64af31250cadab020f987"}, "https://huggingface.co/datasets/superb/superb-data/resolve/main/sd/test/utt2spk": {"num_bytes": 361548, "checksum": "19dcb558aa886f0d553d8d9b8735ea1998b83e96d5245e5511cb732c84625ffd"}, "https://huggingface.co/datasets/superb/superb-data/resolve/main/sd/test/wav.zip": {"num_bytes": 706322334, "checksum": "9c8ee97d3068759c0101bf88684abab77183374dbb3bb40f7c0b25d385992ea6"}}, "download_size": 7190370211, "post_processing_size": null, "dataset_size": 6330288, "size_in_bytes": 7196700499}, "ks": {"description": "Self-supervised learning (SSL) has proven vital for advancing research in\nnatural language processing (NLP) and computer vision (CV). The paradigm\npretrains a shared model on large volumes of unlabeled data and achieves\nstate-of-the-art (SOTA) for various tasks with minimal adaptation. However, the\nspeech processing community lacks a similar setup to systematically explore the\nparadigm. To bridge this gap, we introduce Speech processing Universal\nPERformance Benchmark (SUPERB). SUPERB is a leaderboard to benchmark the\nperformance of a shared model across a wide range of speech processing tasks\nwith minimal architecture changes and labeled data. Among multiple usages of the\nshared model, we especially focus on extracting the representation learned from\nSSL due to its preferable re-usability. We present a simple framework to solve\nSUPERB tasks by learning task-specialized lightweight prediction heads on top of\nthe frozen shared model. Our results demonstrate that the framework is promising\nas SSL representations show competitive generalizability and accessibility\nacross SUPERB tasks. We release SUPERB as a challenge with a leaderboard and a\nbenchmark toolkit to fuel the research in representation learning and general\nspeech processing.\n\nNote that in order to limit the required storage for preparing this dataset, the\naudio is stored in the .wav format and is not converted to a float32 array. To\nconvert the audio file to a float32 array, please make use of the `.map()`\nfunction as follows:\n\n\n```python\nimport soundfile as sf\n\ndef map_to_array(batch):\n speech_array, _ = sf.read(batch[\"file\"])\n batch[\"speech\"] = speech_array\n return batch\n\ndataset = dataset.map(map_to_array, remove_columns=[\"file\"])\n```\n", "citation": "@article{DBLP:journals/corr/abs-2105-01051,\n author = {Shu{-}Wen Yang and\n Po{-}Han Chi and\n Yung{-}Sung Chuang and\n Cheng{-}I Jeff Lai and\n Kushal Lakhotia and\n Yist Y. Lin and\n Andy T. 
Liu and\n Jiatong Shi and\n Xuankai Chang and\n Guan{-}Ting Lin and\n Tzu{-}Hsien Huang and\n Wei{-}Cheng Tseng and\n Ko{-}tik Lee and\n Da{-}Rong Liu and\n Zili Huang and\n Shuyan Dong and\n Shang{-}Wen Li and\n Shinji Watanabe and\n Abdelrahman Mohamed and\n Hung{-}yi Lee},\n title = {{SUPERB:} Speech processing Universal PERformance Benchmark},\n journal = {CoRR},\n volume = {abs/2105.01051},\n year = {2021},\n url = {https://arxiv.org/abs/2105.01051},\n archivePrefix = {arXiv},\n eprint = {2105.01051},\n timestamp = {Thu, 01 Jul 2021 13:30:22 +0200},\n biburl = {https://dblp.org/rec/journals/corr/abs-2105-01051.bib},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}\n", "homepage": "https://www.tensorflow.org/datasets/catalog/speech_commands", "license": "", "features": {"file": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 12, "names": ["yes", "no", "up", "down", "left", "right", "on", "off", "stop", "go", "_silence_", "_unknown_"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": {"input": "file", "output": "label"}, "task_templates": null, "builder_name": "superb", "config_name": "ks", "version": {"version_str": "1.9.0", "description": "", "major": 1, "minor": 9, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 8467781, "num_examples": 51094, "dataset_name": "superb"}, "validation": {"name": "validation", "num_bytes": 1126476, "num_examples": 6798, "dataset_name": "superb"}, "test": {"name": "test", "num_bytes": 510619, "num_examples": 3081, "dataset_name": "superb"}}, "download_checksums": {"http://download.tensorflow.org/data/speech_commands_v0.01.tar.gz": {"num_bytes": 1489096277, "checksum": "743935421bb51cccdb6bdd152e04c5c70274e935c82119ad7faeec31780d811d"}, "http://download.tensorflow.org/data/speech_commands_test_set_v0.01.tar.gz": {"num_bytes": 71271436, "checksum": "baa084f6b62c91de660ff0588ae4dfc4e4d534aa99ac0e5f406cba75836cbd00"}}, "download_size": 1560367713, "post_processing_size": null, "dataset_size": 10104876, "size_in_bytes": 1570472589}, "ic": {"description": "Self-supervised learning (SSL) has proven vital for advancing research in\nnatural language processing (NLP) and computer vision (CV). The paradigm\npretrains a shared model on large volumes of unlabeled data and achieves\nstate-of-the-art (SOTA) for various tasks with minimal adaptation. However, the\nspeech processing community lacks a similar setup to systematically explore the\nparadigm. To bridge this gap, we introduce Speech processing Universal\nPERformance Benchmark (SUPERB). SUPERB is a leaderboard to benchmark the\nperformance of a shared model across a wide range of speech processing tasks\nwith minimal architecture changes and labeled data. Among multiple usages of the\nshared model, we especially focus on extracting the representation learned from\nSSL due to its preferable re-usability. We present a simple framework to solve\nSUPERB tasks by learning task-specialized lightweight prediction heads on top of\nthe frozen shared model. Our results demonstrate that the framework is promising\nas SSL representations show competitive generalizability and accessibility\nacross SUPERB tasks. 
We release SUPERB as a challenge with a leaderboard and a\nbenchmark toolkit to fuel the research in representation learning and general\nspeech processing.\n\nNote that in order to limit the required storage for preparing this dataset, the\naudio is stored in the .flac format and is not converted to a float32 array. To\nconvert, the audio file to a float32 array, please make use of the `.map()`\nfunction as follows:\n\n\n```python\nimport soundfile as sf\n\ndef map_to_array(batch):\n speech_array, _ = sf.read(batch[\"file\"])\n batch[\"speech\"] = speech_array\n return batch\n\ndataset = dataset.map(map_to_array, remove_columns=[\"file\"])\n```\n", "citation": "@article{DBLP:journals/corr/abs-2105-01051,\n author = {Shu{-}Wen Yang and\n Po{-}Han Chi and\n Yung{-}Sung Chuang and\n Cheng{-}I Jeff Lai and\n Kushal Lakhotia and\n Yist Y. Lin and\n Andy T. Liu and\n Jiatong Shi and\n Xuankai Chang and\n Guan{-}Ting Lin and\n Tzu{-}Hsien Huang and\n Wei{-}Cheng Tseng and\n Ko{-}tik Lee and\n Da{-}Rong Liu and\n Zili Huang and\n Shuyan Dong and\n Shang{-}Wen Li and\n Shinji Watanabe and\n Abdelrahman Mohamed and\n Hung{-}yi Lee},\n title = {{SUPERB:} Speech processing Universal PERformance Benchmark},\n journal = {CoRR},\n volume = {abs/2105.01051},\n year = {2021},\n url = {https://arxiv.org/abs/2105.01051},\n archivePrefix = {arXiv},\n eprint = {2105.01051},\n timestamp = {Thu, 01 Jul 2021 13:30:22 +0200},\n biburl = {https://dblp.org/rec/journals/corr/abs-2105-01051.bib},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}\n", "homepage": "https://fluent.ai/fluent-speech-commands-a-dataset-for-spoken-language-understanding-research/", "license": "", "features": {"file": {"dtype": "string", "id": null, "_type": "Value"}, "speaker_id": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "action": {"num_classes": 6, "names": ["activate", "bring", "change language", "deactivate", "decrease", "increase"], "names_file": null, "id": null, "_type": "ClassLabel"}, "object": {"num_classes": 14, "names": ["Chinese", "English", "German", "Korean", "heat", "juice", "lamp", "lights", "music", "newspaper", "none", "shoes", "socks", "volume"], "names_file": null, "id": null, "_type": "ClassLabel"}, "location": {"num_classes": 4, "names": ["bedroom", "kitchen", "none", "washroom"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "superb", "config_name": "ic", "version": {"version_str": "1.9.0", "description": "", "major": 1, "minor": 9, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 7071466, "num_examples": 23132, "dataset_name": "superb"}, "validation": {"name": "validation", "num_bytes": 953622, "num_examples": 3118, "dataset_name": "superb"}, "test": {"name": "test", "num_bytes": 1158347, "num_examples": 3793, "dataset_name": "superb"}}, "download_checksums": {"http://fluent.ai:2052/jf8398hf30f0381738rucj3828chfdnchs.tar.gz": {"num_bytes": 1544093324, "checksum": "4376699f7daf134a9fa57a1d880ffcaaf94a3e2551ba0b40ad894d7abb71aacb"}}, "download_size": 1544093324, "post_processing_size": null, "dataset_size": 9183435, "size_in_bytes": 1553276759}, "si": {"description": "Self-supervised learning (SSL) has proven vital for advancing research in\nnatural language processing (NLP) and computer vision (CV). 
The paradigm\npretrains a shared model on large volumes of unlabeled data and achieves\nstate-of-the-art (SOTA) for various tasks with minimal adaptation. However, the\nspeech processing community lacks a similar setup to systematically explore the\nparadigm. To bridge this gap, we introduce Speech processing Universal\nPERformance Benchmark (SUPERB). SUPERB is a leaderboard to benchmark the\nperformance of a shared model across a wide range of speech processing tasks\nwith minimal architecture changes and labeled data. Among multiple usages of the\nshared model, we especially focus on extracting the representation learned from\nSSL due to its preferable re-usability. We present a simple framework to solve\nSUPERB tasks by learning task-specialized lightweight prediction heads on top of\nthe frozen shared model. Our results demonstrate that the framework is promising\nas SSL representations show competitive generalizability and accessibility\nacross SUPERB tasks. We release SUPERB as a challenge with a leaderboard and a\nbenchmark toolkit to fuel the research in representation learning and general\nspeech processing.\n\nNote that in order to limit the required storage for preparing this dataset, the\naudio is stored in the .flac format and is not converted to a float32 array. To\nconvert, the audio file to a float32 array, please make use of the `.map()`\nfunction as follows:\n\n\n```python\nimport soundfile as sf\n\ndef map_to_array(batch):\n speech_array, _ = sf.read(batch[\"file\"])\n batch[\"speech\"] = speech_array\n return batch\n\ndataset = dataset.map(map_to_array, remove_columns=[\"file\"])\n```\n", "citation": "@article{DBLP:journals/corr/abs-2105-01051,\n author = {Shu{-}Wen Yang and\n Po{-}Han Chi and\n Yung{-}Sung Chuang and\n Cheng{-}I Jeff Lai and\n Kushal Lakhotia and\n Yist Y. Lin and\n Andy T. 
Liu and\n Jiatong Shi and\n Xuankai Chang and\n Guan{-}Ting Lin and\n Tzu{-}Hsien Huang and\n Wei{-}Cheng Tseng and\n Ko{-}tik Lee and\n Da{-}Rong Liu and\n Zili Huang and\n Shuyan Dong and\n Shang{-}Wen Li and\n Shinji Watanabe and\n Abdelrahman Mohamed and\n Hung{-}yi Lee},\n title = {{SUPERB:} Speech processing Universal PERformance Benchmark},\n journal = {CoRR},\n volume = {abs/2105.01051},\n year = {2021},\n url = {https://arxiv.org/abs/2105.01051},\n archivePrefix = {arXiv},\n eprint = {2105.01051},\n timestamp = {Thu, 01 Jul 2021 13:30:22 +0200},\n biburl = {https://dblp.org/rec/journals/corr/abs-2105-01051.bib},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}\n", "homepage": "https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox1.html", "license": "", "features": {"file": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 1251, "names": ["id10001", "id10002", "id10003", "id10004", "id10005", "id10006", "id10007", "id10008", "id10009", "id10010", "id10011", "id10012", "id10013", "id10014", "id10015", "id10016", "id10017", "id10018", "id10019", "id10020", "id10021", "id10022", "id10023", "id10024", "id10025", "id10026", "id10027", "id10028", "id10029", "id10030", "id10031", "id10032", "id10033", "id10034", "id10035", "id10036", "id10037", "id10038", "id10039", "id10040", "id10041", "id10042", "id10043", "id10044", "id10045", "id10046", "id10047", "id10048", "id10049", "id10050", "id10051", "id10052", "id10053", "id10054", "id10055", "id10056", "id10057", "id10058", "id10059", "id10060", "id10061", "id10062", "id10063", "id10064", "id10065", "id10066", "id10067", "id10068", "id10069", "id10070", "id10071", "id10072", "id10073", "id10074", "id10075", "id10076", "id10077", "id10078", "id10079", "id10080", "id10081", "id10082", "id10083", "id10084", "id10085", "id10086", "id10087", "id10088", "id10089", "id10090", "id10091", "id10092", "id10093", "id10094", "id10095", "id10096", "id10097", "id10098", "id10099", "id10100", "id10101", "id10102", "id10103", "id10104", "id10105", "id10106", "id10107", "id10108", "id10109", "id10110", "id10111", "id10112", "id10113", "id10114", "id10115", "id10116", "id10117", "id10118", "id10119", "id10120", "id10121", "id10122", "id10123", "id10124", "id10125", "id10126", "id10127", "id10128", "id10129", "id10130", "id10131", "id10132", "id10133", "id10134", "id10135", "id10136", "id10137", "id10138", "id10139", "id10140", "id10141", "id10142", "id10143", "id10144", "id10145", "id10146", "id10147", "id10148", "id10149", "id10150", "id10151", "id10152", "id10153", "id10154", "id10155", "id10156", "id10157", "id10158", "id10159", "id10160", "id10161", "id10162", "id10163", "id10164", "id10165", "id10166", "id10167", "id10168", "id10169", "id10170", "id10171", "id10172", "id10173", "id10174", "id10175", "id10176", "id10177", "id10178", "id10179", "id10180", "id10181", "id10182", "id10183", "id10184", "id10185", "id10186", "id10187", "id10188", "id10189", "id10190", "id10191", "id10192", "id10193", "id10194", "id10195", "id10196", "id10197", "id10198", "id10199", "id10200", "id10201", "id10202", "id10203", "id10204", "id10205", "id10206", "id10207", "id10208", "id10209", "id10210", "id10211", "id10212", "id10213", "id10214", "id10215", "id10216", "id10217", "id10218", "id10219", "id10220", "id10221", "id10222", "id10223", "id10224", "id10225", "id10226", "id10227", "id10228", "id10229", "id10230", "id10231", "id10232", "id10233", "id10234", "id10235", "id10236", "id10237", "id10238", "id10239", "id10240", 
"id10241", "id10242", "id10243", "id10244", "id10245", "id10246", "id10247", "id10248", "id10249", "id10250", "id10251", "id10252", "id10253", "id10254", "id10255", "id10256", "id10257", "id10258", "id10259", "id10260", "id10261", "id10262", "id10263", "id10264", "id10265", "id10266", "id10267", "id10268", "id10269", "id10270", "id10271", "id10272", "id10273", "id10274", "id10275", "id10276", "id10277", "id10278", "id10279", "id10280", "id10281", "id10282", "id10283", "id10284", "id10285", "id10286", "id10287", "id10288", "id10289", "id10290", "id10291", "id10292", "id10293", "id10294", "id10295", "id10296", "id10297", "id10298", "id10299", "id10300", "id10301", "id10302", "id10303", "id10304", "id10305", "id10306", "id10307", "id10308", "id10309", "id10310", "id10311", "id10312", "id10313", "id10314", "id10315", "id10316", "id10317", "id10318", "id10319", "id10320", "id10321", "id10322", "id10323", "id10324", "id10325", "id10326", "id10327", "id10328", "id10329", "id10330", "id10331", "id10332", "id10333", "id10334", "id10335", "id10336", "id10337", "id10338", "id10339", "id10340", "id10341", "id10342", "id10343", "id10344", "id10345", "id10346", "id10347", "id10348", "id10349", "id10350", "id10351", "id10352", "id10353", "id10354", "id10355", "id10356", "id10357", "id10358", "id10359", "id10360", "id10361", "id10362", "id10363", "id10364", "id10365", "id10366", "id10367", "id10368", "id10369", "id10370", "id10371", "id10372", "id10373", "id10374", "id10375", "id10376", "id10377", "id10378", "id10379", "id10380", "id10381", "id10382", "id10383", "id10384", "id10385", "id10386", "id10387", "id10388", "id10389", "id10390", "id10391", "id10392", "id10393", "id10394", "id10395", "id10396", "id10397", "id10398", "id10399", "id10400", "id10401", "id10402", "id10403", "id10404", "id10405", "id10406", "id10407", "id10408", "id10409", "id10410", "id10411", "id10412", "id10413", "id10414", "id10415", "id10416", "id10417", "id10418", "id10419", "id10420", "id10421", "id10422", "id10423", "id10424", "id10425", "id10426", "id10427", "id10428", "id10429", "id10430", "id10431", "id10432", "id10433", "id10434", "id10435", "id10436", "id10437", "id10438", "id10439", "id10440", "id10441", "id10442", "id10443", "id10444", "id10445", "id10446", "id10447", "id10448", "id10449", "id10450", "id10451", "id10452", "id10453", "id10454", "id10455", "id10456", "id10457", "id10458", "id10459", "id10460", "id10461", "id10462", "id10463", "id10464", "id10465", "id10466", "id10467", "id10468", "id10469", "id10470", "id10471", "id10472", "id10473", "id10474", "id10475", "id10476", "id10477", "id10478", "id10479", "id10480", "id10481", "id10482", "id10483", "id10484", "id10485", "id10486", "id10487", "id10488", "id10489", "id10490", "id10491", "id10492", "id10493", "id10494", "id10495", "id10496", "id10497", "id10498", "id10499", "id10500", "id10501", "id10502", "id10503", "id10504", "id10505", "id10506", "id10507", "id10508", "id10509", "id10510", "id10511", "id10512", "id10513", "id10514", "id10515", "id10516", "id10517", "id10518", "id10519", "id10520", "id10521", "id10522", "id10523", "id10524", "id10525", "id10526", "id10527", "id10528", "id10529", "id10530", "id10531", "id10532", "id10533", "id10534", "id10535", "id10536", "id10537", "id10538", "id10539", "id10540", "id10541", "id10542", "id10543", "id10544", "id10545", "id10546", "id10547", "id10548", "id10549", "id10550", "id10551", "id10552", "id10553", "id10554", "id10555", "id10556", "id10557", "id10558", "id10559", "id10560", "id10561", "id10562", "id10563", 
"id10564", "id10565", "id10566", "id10567", "id10568", "id10569", "id10570", "id10571", "id10572", "id10573", "id10574", "id10575", "id10576", "id10577", "id10578", "id10579", "id10580", "id10581", "id10582", "id10583", "id10584", "id10585", "id10586", "id10587", "id10588", "id10589", "id10590", "id10591", "id10592", "id10593", "id10594", "id10595", "id10596", "id10597", "id10598", "id10599", "id10600", "id10601", "id10602", "id10603", "id10604", "id10605", "id10606", "id10607", "id10608", "id10609", "id10610", "id10611", "id10612", "id10613", "id10614", "id10615", "id10616", "id10617", "id10618", "id10619", "id10620", "id10621", "id10622", "id10623", "id10624", "id10625", "id10626", "id10627", "id10628", "id10629", "id10630", "id10631", "id10632", "id10633", "id10634", "id10635", "id10636", "id10637", "id10638", "id10639", "id10640", "id10641", "id10642", "id10643", "id10644", "id10645", "id10646", "id10647", "id10648", "id10649", "id10650", "id10651", "id10652", "id10653", "id10654", "id10655", "id10656", "id10657", "id10658", "id10659", "id10660", "id10661", "id10662", "id10663", "id10664", "id10665", "id10666", "id10667", "id10668", "id10669", "id10670", "id10671", "id10672", "id10673", "id10674", "id10675", "id10676", "id10677", "id10678", "id10679", "id10680", "id10681", "id10682", "id10683", "id10684", "id10685", "id10686", "id10687", "id10688", "id10689", "id10690", "id10691", "id10692", "id10693", "id10694", "id10695", "id10696", "id10697", "id10698", "id10699", "id10700", "id10701", "id10702", "id10703", "id10704", "id10705", "id10706", "id10707", "id10708", "id10709", "id10710", "id10711", "id10712", "id10713", "id10714", "id10715", "id10716", "id10717", "id10718", "id10719", "id10720", "id10721", "id10722", "id10723", "id10724", "id10725", "id10726", "id10727", "id10728", "id10729", "id10730", "id10731", "id10732", "id10733", "id10734", "id10735", "id10736", "id10737", "id10738", "id10739", "id10740", "id10741", "id10742", "id10743", "id10744", "id10745", "id10746", "id10747", "id10748", "id10749", "id10750", "id10751", "id10752", "id10753", "id10754", "id10755", "id10756", "id10757", "id10758", "id10759", "id10760", "id10761", "id10762", "id10763", "id10764", "id10765", "id10766", "id10767", "id10768", "id10769", "id10770", "id10771", "id10772", "id10773", "id10774", "id10775", "id10776", "id10777", "id10778", "id10779", "id10780", "id10781", "id10782", "id10783", "id10784", "id10785", "id10786", "id10787", "id10788", "id10789", "id10790", "id10791", "id10792", "id10793", "id10794", "id10795", "id10796", "id10797", "id10798", "id10799", "id10800", "id10801", "id10802", "id10803", "id10804", "id10805", "id10806", "id10807", "id10808", "id10809", "id10810", "id10811", "id10812", "id10813", "id10814", "id10815", "id10816", "id10817", "id10818", "id10819", "id10820", "id10821", "id10822", "id10823", "id10824", "id10825", "id10826", "id10827", "id10828", "id10829", "id10830", "id10831", "id10832", "id10833", "id10834", "id10835", "id10836", "id10837", "id10838", "id10839", "id10840", "id10841", "id10842", "id10843", "id10844", "id10845", "id10846", "id10847", "id10848", "id10849", "id10850", "id10851", "id10852", "id10853", "id10854", "id10855", "id10856", "id10857", "id10858", "id10859", "id10860", "id10861", "id10862", "id10863", "id10864", "id10865", "id10866", "id10867", "id10868", "id10869", "id10870", "id10871", "id10872", "id10873", "id10874", "id10875", "id10876", "id10877", "id10878", "id10879", "id10880", "id10881", "id10882", "id10883", "id10884", "id10885", "id10886", 
"id10887", "id10888", "id10889", "id10890", "id10891", "id10892", "id10893", "id10894", "id10895", "id10896", "id10897", "id10898", "id10899", "id10900", "id10901", "id10902", "id10903", "id10904", "id10905", "id10906", "id10907", "id10908", "id10909", "id10910", "id10911", "id10912", "id10913", "id10914", "id10915", "id10916", "id10917", "id10918", "id10919", "id10920", "id10921", "id10922", "id10923", "id10924", "id10925", "id10926", "id10927", "id10928", "id10929", "id10930", "id10931", "id10932", "id10933", "id10934", "id10935", "id10936", "id10937", "id10938", "id10939", "id10940", "id10941", "id10942", "id10943", "id10944", "id10945", "id10946", "id10947", "id10948", "id10949", "id10950", "id10951", "id10952", "id10953", "id10954", "id10955", "id10956", "id10957", "id10958", "id10959", "id10960", "id10961", "id10962", "id10963", "id10964", "id10965", "id10966", "id10967", "id10968", "id10969", "id10970", "id10971", "id10972", "id10973", "id10974", "id10975", "id10976", "id10977", "id10978", "id10979", "id10980", "id10981", "id10982", "id10983", "id10984", "id10985", "id10986", "id10987", "id10988", "id10989", "id10990", "id10991", "id10992", "id10993", "id10994", "id10995", "id10996", "id10997", "id10998", "id10999", "id11000", "id11001", "id11002", "id11003", "id11004", "id11005", "id11006", "id11007", "id11008", "id11009", "id11010", "id11011", "id11012", "id11013", "id11014", "id11015", "id11016", "id11017", "id11018", "id11019", "id11020", "id11021", "id11022", "id11023", "id11024", "id11025", "id11026", "id11027", "id11028", "id11029", "id11030", "id11031", "id11032", "id11033", "id11034", "id11035", "id11036", "id11037", "id11038", "id11039", "id11040", "id11041", "id11042", "id11043", "id11044", "id11045", "id11046", "id11047", "id11048", "id11049", "id11050", "id11051", "id11052", "id11053", "id11054", "id11055", "id11056", "id11057", "id11058", "id11059", "id11060", "id11061", "id11062", "id11063", "id11064", "id11065", "id11066", "id11067", "id11068", "id11069", "id11070", "id11071", "id11072", "id11073", "id11074", "id11075", "id11076", "id11077", "id11078", "id11079", "id11080", "id11081", "id11082", "id11083", "id11084", "id11085", "id11086", "id11087", "id11088", "id11089", "id11090", "id11091", "id11092", "id11093", "id11094", "id11095", "id11096", "id11097", "id11098", "id11099", "id11100", "id11101", "id11102", "id11103", "id11104", "id11105", "id11106", "id11107", "id11108", "id11109", "id11110", "id11111", "id11112", "id11113", "id11114", "id11115", "id11116", "id11117", "id11118", "id11119", "id11120", "id11121", "id11122", "id11123", "id11124", "id11125", "id11126", "id11127", "id11128", "id11129", "id11130", "id11131", "id11132", "id11133", "id11134", "id11135", "id11136", "id11137", "id11138", "id11139", "id11140", "id11141", "id11142", "id11143", "id11144", "id11145", "id11146", "id11147", "id11148", "id11149", "id11150", "id11151", "id11152", "id11153", "id11154", "id11155", "id11156", "id11157", "id11158", "id11159", "id11160", "id11161", "id11162", "id11163", "id11164", "id11165", "id11166", "id11167", "id11168", "id11169", "id11170", "id11171", "id11172", "id11173", "id11174", "id11175", "id11176", "id11177", "id11178", "id11179", "id11180", "id11181", "id11182", "id11183", "id11184", "id11185", "id11186", "id11187", "id11188", "id11189", "id11190", "id11191", "id11192", "id11193", "id11194", "id11195", "id11196", "id11197", "id11198", "id11199", "id11200", "id11201", "id11202", "id11203", "id11204", "id11205", "id11206", "id11207", "id11208", "id11209", 
"id11210", "id11211", "id11212", "id11213", "id11214", "id11215", "id11216", "id11217", "id11218", "id11219", "id11220", "id11221", "id11222", "id11223", "id11224", "id11225", "id11226", "id11227", "id11228", "id11229", "id11230", "id11231", "id11232", "id11233", "id11234", "id11235", "id11236", "id11237", "id11238", "id11239", "id11240", "id11241", "id11242", "id11243", "id11244", "id11245", "id11246", "id11247", "id11248", "id11249", "id11250", "id11251"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": {"input": "file", "output": "label"}, "task_templates": null, "builder_name": "superb", "config_name": "si", "version": {"version_str": "1.9.0", "description": "", "major": 1, "minor": 9, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 12729268, "num_examples": 138361, "dataset_name": "superb"}, "validation": {"name": "validation", "num_bytes": 635172, "num_examples": 6904, "dataset_name": "superb"}, "test": {"name": "test", "num_bytes": 759096, "num_examples": 8251, "dataset_name": "superb"}}, "download_checksums": {}, "download_size": 0, "post_processing_size": null, "dataset_size": 14123536, "size_in_bytes": 14123536}}
superb.py
CHANGED
@@ -16,7 +16,7 @@
 # Lint as: python3
 """SUPERB: Speech processing Universal PERformance Benchmark."""
 
-
+import csv
 import glob
 import os
 import textwrap
@@ -81,8 +81,8 @@ benchmark toolkit to fuel the research in representation learning and general
 speech processing.
 
 Note that in order to limit the required storage for preparing this dataset, the
-audio is stored in the .
-convert
+audio is stored in the .wav format and is not converted to a float32 array. To
+convert the audio file to a float32 array, please make use of the `.map()`
 function as follows:
 
 
@@ -105,8 +105,8 @@ class SuperbConfig(datasets.BuilderConfig):
     def __init__(
         self,
         features,
-        data_url,
         url,
+        data_url=None,
         supervised_keys=None,
         task_templates=None,
         **kwargs,
@@ -154,7 +154,7 @@ class Superb(datasets.GeneratorBasedBuilder):
                 """\
                 Keyword Spotting (KS) detects preregistered keywords by classifying utterances into a predefined set of
                 words. The task is usually performed on-device for the fast response time. Thus, accuracy, model size, and
-                inference time are all crucial. SUPERB uses the widely used
+                inference time are all crucial. SUPERB uses the widely used Speech Commands dataset v1.0 for the task.
                 The dataset consists of ten classes of keywords, a class for silence, and an unknown class to include the
                 false positive. The evaluation metric is accuracy (ACC)"""
             ),
@@ -183,6 +183,65 @@ class Superb(datasets.GeneratorBasedBuilder):
             url="https://www.tensorflow.org/datasets/catalog/speech_commands",
             data_url="http://download.tensorflow.org/data/{filename}",
         ),
+        SuperbConfig(
+            name="ic",
+            description=textwrap.dedent(
+                """\
+                Intent Classification (IC) classifies utterances into predefined classes to determine the intent of
+                speakers. SUPERB uses the Fluent Speech Commands dataset, where each utterance is tagged with three intent
+                labels: action, object, and location. The evaluation metric is accuracy (ACC)."""
+            ),
+            features=datasets.Features(
+                {
+                    "file": datasets.Value("string"),
+                    "speaker_id": datasets.Value("string"),
+                    "text": datasets.Value("string"),
+                    "action": datasets.ClassLabel(
+                        names=["activate", "bring", "change language", "deactivate", "decrease", "increase"]
+                    ),
+                    "object": datasets.ClassLabel(
+                        names=[
+                            "Chinese",
+                            "English",
+                            "German",
+                            "Korean",
+                            "heat",
+                            "juice",
+                            "lamp",
+                            "lights",
+                            "music",
+                            "newspaper",
+                            "none",
+                            "shoes",
+                            "socks",
+                            "volume",
+                        ]
+                    ),
+                    "location": datasets.ClassLabel(names=["bedroom", "kitchen", "none", "washroom"]),
+                }
+            ),
+            supervised_keys=None,
+            url="https://fluent.ai/fluent-speech-commands-a-dataset-for-spoken-language-understanding-research/",
+            data_url="http://fluent.ai:2052/jf8398hf30f0381738rucj3828chfdnchs.tar.gz",
+        ),
+        SuperbConfig(
+            name="si",
+            description=textwrap.dedent(
+                """\
+                Speaker Identification (SI) classifies each utterance for its speaker identity as a multi-class
+                classification, where speakers are in the same predefined set for both training and testing. The widely
+                used VoxCeleb1 dataset is adopted, and the evaluation metric is accuracy (ACC)."""
+            ),
+            features=datasets.Features(
+                {
+                    "file": datasets.Value("string"),
+                    # VoxCeleb1 contains 1251 speaker IDs in range ["id10001",..."id11251"]
+                    "label": datasets.ClassLabel(names=[f"id{i + 10001}" for i in range(1251)]),
+                }
+            ),
+            supervised_keys=("file", "label"),
+            url="https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox1.html",
+        ),
         SuperbConfig(
             name="sd",
             description=textwrap.dedent(
@@ -213,8 +272,62 @@ class Superb(datasets.GeneratorBasedBuilder):
             url="https://github.com/ftshijt/LibriMix",
             data_url="https://huggingface.co/datasets/superb/superb-data/resolve/main/sd/{split}/{filename}",
         ),
+        SuperbConfig(
+            name="er",
+            description=textwrap.dedent(
+                """\
+                Emotion Recognition (ER) predicts an emotion class for each utterance. The most widely used ER dataset
+                IEMOCAP is adopted, and we follow the conventional evaluation protocol: we drop the unbalanced emotion
+                classes to leave the final four classes with a similar amount of data points and cross-validate on five
+                folds of the standard splits. The evaluation metric is accuracy (ACC)."""
+            ),
+            features=datasets.Features(
+                {
+                    "file": datasets.Value("string"),
+                    "label": datasets.ClassLabel(names=["neu", "hap", "ang", "sad"]),
+                }
+            ),
+            supervised_keys=("file", "label"),
+            url="https://sail.usc.edu/iemocap/",
+        ),
     ]
 
+    @property
+    def manual_download_instructions(self):
+        if self.config.name == "si":
+            return textwrap.dedent(
+                """
+                Please download the VoxCeleb dataset using the following script,
+                which should create `VoxCeleb1/wav/id*` directories for both train and test speakers`:
+                ```
+                mkdir VoxCeleb1
+                cd VoxCeleb1
+
+                wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partaa
+                wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partab
+                wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partac
+                wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partad
+                cat vox1_dev* > vox1_dev_wav.zip
+                unzip vox1_dev_wav.zip
+
+                wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_test_wav.zip
+                unzip vox1_test_wav.zip
+
+                # download the official SUPERB train-dev-test split
+                wget https://raw.githubusercontent.com/s3prl/s3prl/master/s3prl/downstream/voxceleb1/veri_test_class.txt
+                ```"""
+            )
+        elif self.config.name == "er":
+            return textwrap.dedent(
+                """
+                Please download the IEMOCAP dataset after submitting the request form here:
+                https://sail.usc.edu/iemocap/iemocap_release.htm
+                Having downloaded the dataset you can extract it with `tar -xvzf IEMOCAP_full_release.tar.gz`
+                which should create a folder called `IEMOCAP_full_release`
+                """
+            )
+        return None
+
     def _info(self):
         return datasets.DatasetInfo(
             description=_DESCRIPTION,
@@ -260,6 +373,34 @@ class Superb(datasets.GeneratorBasedBuilder):
                     name=datasets.Split.TEST, gen_kwargs={"archive_path": archive_path["test"], "split": "test"}
                 ),
             ]
+        elif self.config.name == "ic":
+            archive_path = dl_manager.download_and_extract(self.config.data_url)
+            return [
+                datasets.SplitGenerator(
+                    name=datasets.Split.TRAIN,
+                    gen_kwargs={"archive_path": archive_path, "split": "train"},
+                ),
+                datasets.SplitGenerator(
+                    name=datasets.Split.VALIDATION,
+                    gen_kwargs={"archive_path": archive_path, "split": "valid"},
+                ),
+                datasets.SplitGenerator(
+                    name=datasets.Split.TEST, gen_kwargs={"archive_path": archive_path, "split": "test"}
+                ),
+            ]
+        elif self.config.name == "si":
+            manual_dir = os.path.abspath(os.path.expanduser(dl_manager.manual_dir))
+            return [
+                datasets.SplitGenerator(
+                    name=datasets.Split.TRAIN,
+                    gen_kwargs={"archive_path": manual_dir, "split": 1},
+                ),
+                datasets.SplitGenerator(
+                    name=datasets.Split.VALIDATION,
+                    gen_kwargs={"archive_path": manual_dir, "split": 2},
+                ),
+                datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"archive_path": manual_dir, "split": 3}),
+            ]
         elif self.config.name == "sd":
             splits = ["train", "dev", "test"]
             _DL_URLS = {
@@ -276,11 +417,20 @@ class Superb(datasets.GeneratorBasedBuilder):
                 )
                 for split in splits
             ]
+        elif self.config.name == "er":
+            manual_dir = os.path.abspath(os.path.expanduser(dl_manager.manual_dir))
+            return [
+                datasets.SplitGenerator(
+                    name=f"session{i}",
+                    gen_kwargs={"archive_path": manual_dir, "split": i},
+                )
+                for i in range(1, 6)
+            ]
 
     def _generate_examples(self, archive_path, split=None):
         """Generate examples."""
         if self.config.name == "asr":
-            transcripts_glob = os.path.join(archive_path, "LibriSpeech", "
+            transcripts_glob = os.path.join(archive_path, "LibriSpeech", "*", "*", "*", "*.txt")
             key = 0
             for transcript_path in sorted(glob.glob(transcripts_glob)):
                 transcript_dir_path = os.path.dirname(transcript_path)
@@ -311,6 +461,35 @@ class Superb(datasets.GeneratorBasedBuilder):
                 else:
                     label = "_unknown_"
                 yield key, {"file": audio_file, "label": label}
+        elif self.config.name == "ic":
+            root_path = os.path.join(archive_path, "fluent_speech_commands_dataset")
+            csv_path = os.path.join(root_path, "data", f"{split}_data.csv")
+            with open(csv_path, encoding="utf-8") as csv_file:
+                csv_reader = csv.reader(csv_file, delimiter=",", skipinitialspace=True)
+                next(csv_reader)
+                for row in csv_reader:
+                    key, file_path, speaker_id, text, action, object_, location = row
+                    yield key, {
+                        "file": os.path.join(root_path, file_path),
+                        "speaker_id": speaker_id,
+                        "text": text,
+                        "action": action,
+                        "object": object_,
+                        "location": location,
+                    }
+        elif self.config.name == "si":
+            wav_path = os.path.join(archive_path, "wav")
+            splits_path = os.path.join(archive_path, "veri_test_class.txt")
+            with open(splits_path, "r", encoding="utf-8") as f:
+                for key, line in enumerate(f):
+                    split_id, file_path = line.strip().split(" ")
+                    if int(split_id) != split:
+                        continue
+                    speaker_id = file_path.split("/")[0]
+                    yield key, {
+                        "file": os.path.join(wav_path, file_path),
+                        "label": speaker_id,
+                    }
         elif self.config.name == "sd":
             data = SdData(archive_path)
             args = SdArgs()
@@ -338,6 +517,27 @@ class Superb(datasets.GeneratorBasedBuilder):
                         "speakers": speakers,
                     }
                     key += 1
+        elif self.config.name == "er":
+            root_path = os.path.join(archive_path, f"Session{split}")
+            wav_path = os.path.join(root_path, "sentences", "wav")
+            labels_path = os.path.join(root_path, "dialog", "EmoEvaluation", "*.txt")
+            emotions = ["neu", "hap", "ang", "sad", "exc"]
+            key = 0
+            for labels_file in sorted(glob.glob(labels_path)):
+                with open(labels_file, "r", encoding="utf-8") as f:
+                    for line in f:
+                        if line[0] != "[":
+                            continue
+                        _, filename, emo, _ = line.split("\t")
+                        if emo not in emotions:
+                            continue
+                        wav_subdir = filename.rsplit("_", 1)[0]
+                        filename = f"{filename}.wav"
+                        yield key, {
+                            "file": os.path.join(wav_path, wav_subdir, filename),
+                            "label": emo.replace("exc", "hap"),
+                        }
+                        key += 1
 
 
 class SdData:
@@ -453,7 +653,7 @@ def _get_speakers(rec, data, args):
 
 
 def _split_ks_files(archive_path, split):
-    audio_path = os.path.join(archive_path, "
+    audio_path = os.path.join(archive_path, "**", "*.wav")
    audio_paths = glob.glob(audio_path)
    if split == "test":
        # use all available files for the test archive
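Taken together, the new `ic`, `si` and `er` configs load like the existing ones. A minimal sketch, assuming the `datasets` and `librosa` libraries and local `VoxCeleb1/` and `IEMOCAP_full_release/` directories prepared per the manual download instructions above (the paths and the 16 kHz sampling rate are assumptions for illustration):

```python
import librosa
from datasets import load_dataset

# `ic` downloads automatically; `si` and `er` read from manually prepared
# directories passed via `data_dir` (exposed as `dl_manager.manual_dir`).
ic = load_dataset("superb", "ic", split="train")
si = load_dataset("superb", "si", data_dir="VoxCeleb1", split="train")
er = load_dataset("superb", "er", data_dir="IEMOCAP_full_release", split="session1")

# As the docstring notes, audio stays stored as .wav paths; decode to
# float32 arrays with `.map()`:
def map_to_array(example):
    speech, _ = librosa.load(example["file"], sr=16000, mono=True)
    example["speech"] = speech
    return example

ic = ic.map(map_to_array)
```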