---
license: cc-by-4.0
task_categories:
- audio-classification
language:
- en
tags:
- speech
- speech-classification
- text-to-speech
- spoofing
- accents
pretty_name: ARCTIC-HS
size_categories:
- 10K<n<100K
---
# ARCTIC-HS

An extension of the CMU_ARCTIC and L2-ARCTIC datasets for synthetic speech detection using text-to-speech, featured in the paper *Synthetic speech detection with Wav2Vec 2.0 in various language settings*.
This dataset is 1 of 3 used in the paper, the others being:

- FLEURS-HS
  - the default train, dev and test sets
- FLEURS-HS VITS
  - a test set containing (generally) more difficult synthetic samples
  - separated due to different licensing
## Dataset Details

### Dataset Description

The dataset features 3 parts obtained from the 2 original datasets:

- CMU non-native US English speakers
- CMU native US English speakers
- L2 non-native English speakers
The original ARCTIC samples are used as human samples, while synthetic samples are generated using Google Cloud Text-to-Speech. The resulting symmetric datasets feature exactly twice the samples of the original ones, but we also provide:
- human samples that couldn't be paired
  - 4 speakers in their entirety, whom we couldn't pair with a TTS voice
  - a small number of utterances unrelated to the A and B ARCTIC samples
- synthetic samples that couldn't be paired
  - mostly where a human speaker didn't read the B ARCTIC samples
- **Curated by:** KONTXT by RealNetworks
- **Funded by:** RealNetworks
- **Language(s) (NLP):** English
- **License:** Apache 2.0 for the code, CC BY 4.0 for the dataset, however:
  - the human part of the CMU dataset is under a custom CMU license
    - it should be compatible with CC BY 4.0
  - the human part of the L2 dataset is under CC BY-NC 4.0
### Dataset Sources

The original ARCTIC sets were downloaded from their original sources.

- **CMU_ARCTIC Repository:** festvox.org
- **L2-ARCTIC Repository:** tamu.edu
- **CMU_ARCTIC Paper:** cmu.edu
- **L2-ARCTIC Paper:** tamu.edu
- **Paper:** *Synthetic speech detection with Wav2Vec 2.0 in various language settings*
## Uses

This dataset is best used as a test set for accents. Each sample contains an Audio feature, and a label: `human` or `synthetic`.
### Direct Use

The following snippet of code demonstrates loading the CMU non-native US English speaker part of the dataset:

```python
from datasets import load_dataset

arctic_hs = load_dataset(
    "realnetworks-kontxt/arctic-hs",
    "cmu_non-us",
    split="test",
    trust_remote_code=True,
)
```
To load a different part, change `cmu_non-us` into one of the following:

- `cmu_us` for CMU native US English speakers
- `l2` for L2 non-native speakers

This dataset only has a `test` split.
To load only the paired samples, append `_symmetric` to the name. For example, `cmu_non-us` will load the test set that also contains human and synthetic samples without a counterpart, while `cmu_non-us_symmetric` will only load samples where there is both a human and a synthetic variant. This is useful if you want perfectly balanced labels within speakers, or if you wish to exclude speakers for which there are no TTS counterparts at all.
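As an illustration of that balance, the toy list below stands in for the `label` column of a `_symmetric` config (the real column would come from the loaded dataset); in a symmetric config, every human sample has a synthetic counterpart:

```python
from collections import Counter

# Toy stand-in for the `label` column of a *_symmetric config;
# in a symmetric config every human sample has a synthetic counterpart.
labels = ["human", "synthetic", "human", "synthetic", "human", "synthetic"]

counts = Counter(labels)
print(counts["human"] == counts["synthetic"])  # True
```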
The `trust_remote_code=True` parameter is necessary because this dataset uses a custom loader. To see exactly which code is being run, check out the loading script.
## Dataset Structure

The dataset data is contained in the `data` directory. There is 1 directory per part. Within those directories, there are 2 further directories:

- `splits`
- `pairs`

Within the `splits` folder, there is 1 file per split: `test.tar.gz`. Those `.tar.gz` files contain 2 directories:

- `human`
- `synthetic`

Each of these directories contains `.wav` files. Keep in mind that these directories can't be merged, as they share most of their file names. An identical file name implies a speaker-voice pair, e.g. `human/arctic_a0001.wav` and `synthetic/arctic_a0001.wav`.
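The archive layout can be sketched with Python's standard `tarfile` module. The snippet below builds a tiny in-memory archive mimicking the structure of `test.tar.gz` (placeholder bytes instead of real audio, illustrative file names) and groups its members by top-level directory:

```python
import io
import tarfile
from collections import defaultdict

# Build a tiny in-memory archive mimicking the layout of test.tar.gz;
# member names are illustrative, following the ARCTIC naming scheme.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    for name in ("human/arctic_a0001.wav", "synthetic/arctic_a0001.wav"):
        data = b"\x00" * 4  # placeholder bytes instead of real audio
        info = tarfile.TarInfo(name=name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

# Group members by their top-level directory (which doubles as the label)
buf.seek(0)
groups = defaultdict(list)
with tarfile.open(fileobj=buf, mode="r:gz") as tar:
    for member in tar.getmembers():
        label, _, filename = member.name.partition("/")
        groups[label].append(filename)

print(dict(groups))
# {'human': ['arctic_a0001.wav'], 'synthetic': ['arctic_a0001.wav']}
```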
The `pairs` folder contains a list of file names within each speaker, and whether or not there is a human-synthetic pair. Based on that metadata, we determine which samples appear in the `_symmetric` datasets.
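One way to picture how such pairing metadata selects the symmetric subset: the names present on both the human and synthetic side are kept, the rest are only in the full configs. A minimal sketch with hypothetical per-speaker file lists (the real lists come from the `pairs` metadata):

```python
# Hypothetical per-speaker file lists; the real ones come from the
# `pairs` metadata described above.
human = {"arctic_a0001.wav", "arctic_a0002.wav", "arctic_b0001.wav"}
synthetic = {"arctic_a0001.wav", "arctic_a0002.wav"}

paired = human & synthetic      # kept in the *_symmetric configs
human_only = human - synthetic  # present only in the full configs

print(sorted(paired))      # ['arctic_a0001.wav', 'arctic_a0002.wav']
print(sorted(human_only))  # ['arctic_b0001.wav']
```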
Back to the part directories, each contains 2 metadata files, which are not used in the loaded dataset, but might be useful to researchers:

- `speaker-metadata.csv` - contains the speaker IDs paired with their speech properties
- `voice-metadata.csv` - contains speaker-TTS name pairs

Finally, the `data` root contains a single metadata file, `prompts.csv`, which, as the name suggests, contains the prompt transcripts. The only samples for which there are no transcripts are the ARCTIC-C ones, for which we couldn't find a source on the internet.
### Sample

A sample contains an Audio feature, `audio`, and a string `label`.
```python
{
  'audio': {
    'path': 'ahw/human/arctic_a0001.wav',
    'array': array([0., 0., 0., ..., 0., 0., 0.]),
    'sampling_rate': 16000
  },
  'label': 'human'
}
```
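For training a binary classifier, the string label can be mapped to an integer. A minimal sketch on a sample shaped like the one above (the 0/1 mapping is our own choice, not prescribed by the dataset):

```python
# Our own mapping choice; not prescribed by the dataset
LABEL_TO_ID = {"human": 0, "synthetic": 1}

def encode(sample):
    """Add an integer `label_id` alongside the string `label`."""
    sample["label_id"] = LABEL_TO_ID[sample["label"]]
    return sample

sample = {
    "audio": {"path": "ahw/human/arctic_a0001.wav",
              "array": [0.0, 0.0, 0.0],
              "sampling_rate": 16000},
    "label": "human",
}
print(encode(sample)["label_id"])  # 0
```

With a loaded dataset, the same function could be applied to every sample via `arctic_hs.map(encode)`.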
## Citation

The dataset is featured alongside our paper, *Synthetic speech detection with Wav2Vec 2.0 in various language settings*, which will be presented at the IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW). We'll provide links once it's available online.

**BibTeX:**

Note: the following BibTeX is incomplete - we'll update it once the final version is known.
```bibtex
@inproceedings{dropuljic-ssdww2v2ivls,
  author={Dropuljić, Branimir and Šuflaj, Miljenko and Jertec, Andrej and Obadić, Leo},
  booktitle={2024 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW)},
  title={Synthetic speech detection with Wav2Vec 2.0 in various language settings},
  year={2024},
  volume={},
  number={},
  pages={1-5},
  keywords={Synthetic speech detection;text-to-speech;wav2vec 2.0;spoofing attack;multilingualism},
  doi={}
}
```