|
--- |
|
license: cc-by-4.0 |
|
task_categories: |
|
- audio-classification |
|
language: |
|
- de |
|
- en |
|
- es |
|
- fr |
|
- it |
|
- nl |
|
- pl |
|
- sv |
|
tags: |
|
- speech |
|
- speech-classification
|
- text-to-speech |
|
- spoofing |
|
- multilingualism |
|
|
|
pretty_name: FLEURS-HS |
|
size_categories: |
|
- 10K<n<100K |
|
--- |
|
|
|
# FLEURS-HS |
|
|
|
An extension of the [FLEURS](https://huggingface.co/datasets/google/fleurs) dataset for synthetic speech detection using text-to-speech, featured in the paper **Synthetic speech detection with Wav2Vec 2.0 in various language settings**. |
|
|
|
This dataset is 1 of 3 used in the paper, the others being: |
|
- [FLEURS-HS VITS](https://huggingface.co/datasets/realnetworks-kontxt/fleurs-hs-vits) |
|
- test set containing (generally) more difficult synthetic samples |
|
- separated due to different licensing |
|
- [ARCTIC-HS](https://huggingface.co/datasets/realnetworks-kontxt/arctic-hs) |
|
- extension of the [CMU_ARCTIC](http://festvox.org/cmu_arctic/) and [L2-ARCTIC](https://psi.engr.tamu.edu/l2-arctic-corpus/) sets in a similar manner |
|
|
|
## Dataset Details |
|
|
|
### Dataset Description |
|
|
|
The dataset features 8 languages originally seen in FLEURS: |
|
|
|
- German |
|
- English |
|
- Spanish |
|
- French |
|
- Italian |
|
- Dutch |
|
- Polish |
|
- Swedish |
|
|
|
The original FLEURS samples are used as `human` samples, while `synthetic` samples are generated using: |
|
|
|
- [Google Cloud Text-To-Speech](https://cloud.google.com/text-to-speech) |
|
- [Azure Text-To-Speech](https://azure.microsoft.com/en-us/products/ai-services/text-to-speech) |
|
- [Amazon Polly](https://aws.amazon.com/polly/) |
|
|
|
The resulting dataset contains roughly twice as many samples per language as the original, since nearly every `human` sample has a `synthetic` counterpart.
|
|
|
|
|
- **Curated by:** [KONTXT by RealNetworks](https://realnetworks.com/kontxt) |
|
- **Funded by:** [RealNetworks](https://realnetworks.com/) |
|
- **Language(s) (NLP):** English, German, Spanish, French, Italian, Dutch, Polish, Swedish |
|
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) for the code, [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) for the dataset |
|
|
|
### Dataset Sources |
|
|
|
The original FLEURS dataset was downloaded from [HuggingFace](https://huggingface.co/datasets/google/fleurs). |
|
|
|
- **FLEURS Repository:** [HuggingFace](https://huggingface.co/datasets/google/fleurs) |
|
- **FLEURS Paper:** [arXiv](https://arxiv.org/abs/2205.12446) |
|
|
|
- **Paper:** Synthetic speech detection with Wav2Vec 2.0 in various language settings |
|
|
|
## Uses |
|
|
|
This dataset is best used to train synthetic speech detection models. Each sample contains an `Audio` feature and a label: `human` or `synthetic`.
|
|
|
### Direct Use |
|
|
|
The following snippet of code demonstrates loading the training split for English: |
|
|
|
```python |
|
from datasets import load_dataset |
|
|
|
fleurs_hs = load_dataset( |
|
"realnetworks-kontxt/fleurs-hs", |
|
"en_us", |
|
split="train", |
|
trust_remote_code=True, |
|
) |
|
``` |
|
|
|
To load a different language, change `en_us` to one of the following:
|
- `de_de` for German |
|
- `es_419` for Spanish |
|
- `fr_fr` for French |
|
- `it_it` for Italian |
|
- `nl_nl` for Dutch |
|
- `pl_pl` for Polish |
|
- `sv_se` for Swedish |
|
|
|
To load a different split, change the `split` value to `dev` or `test`. |
|
|
|
The `trust_remote_code=True` parameter is necessary because this dataset uses a custom loader. To see exactly which code is run, check out the [loading script](./fleurs-hs.py).
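
For example, combining the two options above, a minimal sketch that loads the German `dev` split looks like this:

```python
from datasets import load_dataset

# German dev split; swap the config name and split value as described above
fleurs_hs_de_dev = load_dataset(
    "realnetworks-kontxt/fleurs-hs",
    "de_de",
    split="dev",
    trust_remote_code=True,
)
```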
|
|
|
## Dataset Structure |
|
|
|
The dataset's data is contained in the [data directory](https://huggingface.co/datasets/realnetworks-kontxt/fleurs-hs/tree/main/data).
|
|
|
There is one directory per language.
|
|
|
Within each of those directories there is a directory named `splits`, which contains one file per split:
|
- `train.tar.gz` |
|
- `dev.tar.gz` |
|
- `test.tar.gz` |
|
|
|
Those `.tar.gz` files contain 2 directories: |
|
- `human` |
|
- `synthetic` |
|
|
|
Each of these directories contains the `.wav` files for its label (and split). Keep in mind that the two directories can't simply be merged, as they share most of their file names. An identical file name implies a speaker-voice pair, e.g. `human/123.wav` and `synthetic/123.wav`.
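
As a quick sanity check, a downloaded archive can be inspected with Python's standard library. This is only a sketch; the local path below is an assumption based on the layout described above:

```python
import tarfile

# assumed local path following the layout described above; adjust as needed
archive_path = "data/en_us/splits/test.tar.gz"

with tarfile.open(archive_path, "r:gz") as archive:
    names = archive.getnames()

# members are expected to sit directly under the human/ and synthetic/ directories
human = [name for name in names if name.startswith("human/") and name.endswith(".wav")]
synthetic = [name for name in names if name.startswith("synthetic/") and name.endswith(".wav")]

print(f"{len(human)} human files, {len(synthetic)} synthetic files")
```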
|
|
|
Finally, each language directory also contains 4 metadata files, which are not used by the loading script but might be useful to researchers (see the example after this list):
|
- `recording-metadata.csv` |
|
- contains the transcript ID, file name, split and gender of the original FLEURS samples |
|
- `recording-transcripts.csv` |
|
  - contains the transcripts of the original FLEURS samples
|
- `voice-distribution.csv` |
|
- contains the TTS vendor, TTS name, TTS engine, FLEURS gender and TTS gender for each ID-file name pair |
|
- useful for tracking what models were used to get specific synthetic samples |
|
- `voice-metadata.csv` |
|
  - contains the grouping of the TTS voices used, alongside the splits they were used for
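
As an illustration, one of these metadata files can be fetched and inspected as sketched below. The exact file path and column names are assumptions, so check the actual files in the data directory:

```python
import pandas as pd
from huggingface_hub import hf_hub_download

# assumed path to one language's metadata file; see the data directory for the real layout
metadata_path = hf_hub_download(
    repo_id="realnetworks-kontxt/fleurs-hs",
    filename="data/en_us/recording-metadata.csv",
    repo_type="dataset",
)

metadata = pd.read_csv(metadata_path)
print(metadata.head())  # transcript ID, file name, split and gender columns are expected
```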
|
|
|
### Sample |
|
|
|
A sample contains an `Audio` feature `audio` and a string `label`.
|
|
|
``` |
|
{ |
|
'audio': { |
|
'path': 'human/10004088536354799741.wav', |
|
'array': array([0., 0., 0., ..., 0., 0., 0.]), |
|
'sampling_rate': 16000 |
|
}, |
|
'label': 'human' |
|
} |
|
``` |
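
For instance, assuming `fleurs_hs` was loaded as shown in the Direct Use section, individual fields can be accessed like this:

```python
# assuming `fleurs_hs` was loaded as shown in the Direct Use section
sample = fleurs_hs[0]

waveform = sample["audio"]["array"]               # 1-D numpy array of audio samples
sampling_rate = sample["audio"]["sampling_rate"]  # 16 kHz
label = sample["label"]                           # "human" or "synthetic"

print(f"{label}: {len(waveform) / sampling_rate:.2f} seconds of audio")
```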
|
|
|
## Citation |
|
|
|
The dataset is featured alongside our paper, **Synthetic speech detection with Wav2Vec 2.0 in various language settings**, which will be published at the IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW). We'll provide links once the paper is available online.
|
|
|
**BibTeX:** |
|
|
|
Note: the following BibTeX entry is incomplete; we'll update it once the final version is known.
|
|
|
``` |
|
@inproceedings{dropuljic-ssdww2v2ivls,
  author={Dropuljić, Branimir and Šuflaj, Miljenko and Jertec, Andrej and Obadić, Leo},
  booktitle={2024 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW)},
  title={Synthetic speech detection with Wav2Vec 2.0 in various language settings},
  year={2024},
  volume={},
  number={},
  pages={1-5},
  keywords={Synthetic speech detection;text-to-speech;wav2vec 2.0;spoofing attack;multilingualism},
  doi={}
}
|
``` |
|
|
|
## Dataset Card Authors |
|
|
|
- [Miljenko Šuflaj](https://huggingface.co/suflaj) |
|
|
|
## Dataset Card Contact |
|
|
|
- [Miljenko Šuflaj](mailto:msuflaj@realnetworks.com) |