---
license: cc-by-4.0
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- de
multilinguality:
- monolingual
task_categories:
- automatic-speech-recognition
size_categories:
- 10K<n<100K
pretty_name: Open speech data for German speech recognition
configs:
- config_name: default
data_files:
- split: train
path: splits/train/*.arrow
- split: test
path: splits/test/*.arrow
- split: dev
path: splits/dev/*.arrow
---
# Open speech data for German speech recognition
Language Technology group, Universität Hamburg, Germany (formerly TU Darmstadt)
https://www.inf.uni-hamburg.de/en/inst/ab/lt
https://www.lt.tu-darmstadt.de

Telecooperation Lab, TU Darmstadt, Germany
https://www.tk.informatik.tu-darmstadt.de
## General information
- The speech data was collected in a controlled environment (same room, same microphone distances, etc.)
- The distance between the speakers and the microphones is about one meter
- Each recording includes speaker metadata
- The recordings include several concurrent audio streams from different microphones
- The data is curated (manually checked and corrected) to reduce errors and artefacts
- The speech data is divided into three independent sets: train, dev and test. Dev and test contain new sentences and new speakers that are not part of the training set, in order to assess model quality in a speaker-independent, open-vocabulary setting.
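The speaker-independence property described above can be checked mechanically with set operations; a minimal sketch using hypothetical speaker IDs (the real dataset stores UUIDs in `speaker_id`):

```python
# Toy speaker sets per split; the IDs are hypothetical stand-ins
# for the UUIDs stored in the dataset's `speaker_id` field.
train_speakers = {"spk-01", "spk-02", "spk-03"}
dev_speakers = {"spk-10", "spk-11"}
test_speakers = {"spk-20", "spk-21"}

# Speaker-independent evaluation requires empty pairwise overlap
# between the training speakers and the held-out speakers.
assert train_speakers.isdisjoint(dev_speakers)
assert train_speakers.isdisjoint(test_speakers)
```

The same check can be run on the real splits by collecting the `speaker_id` values of each split into a set first.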
## Information about the data collection procedure:
Training set (recorded in 2014):
The sentences come from two main text sources: the German Wikipedia (spring 2014) and the Europarl corpus. Sentences were randomly chosen from both sources to be read by the speakers. The Europarl corpus (release v7) is a collection of the proceedings of the European Parliament between 1996 and 2011, compiled by Philipp Koehn (Europarl: A Parallel Corpus for Statistical Machine Translation, Philipp Koehn, MT Summit 2005, http://www.statmt.org/europarl/). As a third source, German command and control sentences, typical for a command and control setting in living rooms, were manually specified.
Test/dev sets (recorded in 2015):
Additional sentences from the German Wikipedia and the Europarl corpus were selected for the recordings. We also collected German sentences from the web by crawling the German top-level domain and applying language filtering and deduplication. From these, only sentences starting with quotation marks were kept and randomly sampled. The three text sources are represented with approximately equal numbers of recordings in the test/dev sets.
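The deduplication and citation-selection steps can be illustrated with a small sketch (the sample sentences and the exact filtering logic are assumptions; the real pipeline additionally applied language filtering on the crawl):

```python
# Hypothetical crawled sentences; the real input came from a crawl of
# the German top-level domain.
crawled = [
    '"Wie ich mir das Glück vorstelle", sagte sie.',
    'Ein Satz ohne Anführungszeichen.',
    '"Wie ich mir das Glück vorstelle", sagte sie.',  # exact duplicate
]

# Deduplicate while preserving order, then keep only sentences that
# start with a quotation mark (straight or German-style).
unique = list(dict.fromkeys(crawled))
citations = [s for s in unique if s.startswith(('"', '„'))]
print(len(citations))  # 1
```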
## Microphones
These are the microphones used for the recordings, with additional information about the setting. Not every recording is available with every microphone.
- Kinect-Beam (Kinect 1 Beamformed Audio signal through Kinect SDK)
- Kinect-RAW (Kinect 1 Direct Access as normal microphone)
- Realtek (Internal Realtek Mic of Asus PC - near noisy fan)
- Samson (Samson C01U)
- Yamaha (Yamaha PSG-01S)
If you want to use only one of the microphones, the dataset can be filtered:
```python
from datasets import load_dataset

dataset = load_dataset("uhh-lt/Tuda-De")
# Keep only the recordings made with the Kinect beamformed audio stream
ds = dataset["train"].filter(lambda x: x["microphone"] == "Kinect-Beam")
```
## Sentence representations
The metadata includes the sentence with the original text representation taken from the various text corpora, and a cleaned version in which the sentence is normalised to resemble what the speakers actually said as closely as possible.
## Metadata
For every recording, additional metadata is available:
```
{'audio': {'path': '2015-01-27-13-39-45_Kinect-Beam.wav',
           'array': array([ 0., 0., 0., ..., -0.00015259, -0.0005188 , 0.]),
           'sampling_rate': 16000},
 'gender': 'male',
 'microphone': 'Kinect-Beam',
 'speaker_id': '4a831451-c0c2-44e7-b425-4467307a29e6',
 'angle': '0',
 'ageclass': '41-50',
 'sentence_id': '1643',
 'sentence': 'Wie ich mir das Glück vorstelle',
 'cleaned_sentence': 'Wie ich mir das Glück vorstelle',
 'corpus': 'CITE',
 'muttersprachler': 'Ja',
 'bundesland': 'Hessen',
 'source': "['http://www.kulturradio.de/programm/sendungen/140125/kulturtermin_1904.htm/listall=true/printView=true.html', 'http://www.kulturradio.de/programm/sendungen/140125/kulturtermin_1904.htm/suggestion=true.html', 'http://www.kulturradio.de/programm/sendungen/140125/kulturtermin_1904.html']"}
```
`speaker_id` is a unique and anonymized ID for the speaker who read the sentence. Additional speaker metadata such as gender, age class, whether the speaker is a German native speaker (`muttersprachler`) and the German federal state (`bundesland`) is also available. Most speakers are from Hesse (Hessen) and between 21 and 30 years old.
We kept the raw sentence in `sentence` and include the normalised version in `cleaned_sentence`, where most notably numerals are expanded to their full written forms and punctuation is discarded. The normalised form should be used for training acoustic models.
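A minimal sketch of the punctuation part of this normalisation (illustrative only; the actual cleaning pipeline is not published here and also expands numerals to written forms):

```python
import re

def strip_punctuation(sentence: str) -> str:
    """Drop everything except word characters and whitespace."""
    return re.sub(r"[^\w\s]", "", sentence)

# Quotation marks and the final period are removed; umlauts survive
# because \w matches Unicode letters in Python 3.
print(strip_punctuation('"Wie ich mir das Glück vorstelle."'))
# Wie ich mir das Glück vorstelle
```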
There are four possible text sources; `corpus` states from which corpus the utterance was selected: "WIKI" for the German Wikipedia, "PARL" for the European Parliament Proceedings Parallel Corpus (see http://www.statmt.org/europarl/), "Commands" for short commands typical of a command and control setting, and "CITE" for crawled citations of direct speech. If available, `source` contains one or more URLs pointing to the source document(s) of the utterance.
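As a usage sketch, the `corpus` field can be used to count or select utterances per text source (the rows here are toy stand-ins for rows of the loaded dataset):

```python
from collections import Counter

# Toy rows; real rows come from load_dataset("uhh-lt/Tuda-De").
rows = [
    {"corpus": "WIKI"}, {"corpus": "PARL"}, {"corpus": "WIKI"},
    {"corpus": "Commands"}, {"corpus": "CITE"},
]

# Count recordings per text source.
per_source = Counter(r["corpus"] for r in rows)
print(per_source["WIKI"])  # 2

# Select only command-and-control utterances.
commands = [r for r in rows if r["corpus"] == "Commands"]
print(len(commands))  # 1
```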
## Split structure:
Train: contains the recordings for the training set. The sentences in this split were recorded during 2014. A sentence may be recorded several times, by the same speaker or by different speakers.
Test / Dev: contain the recordings for the test and dev sets. There is no sentence overlap with Train, and the test/dev recordings were made with a different set of speakers. Each sentence in Test / Dev is unique, i.e. recorded only once by one speaker.
Number of recorded sentences:
- Train: 14717
- Dev: 1085
- Test: 1028
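From these counts, the relative split sizes follow by simple arithmetic on the numbers above:

```python
# Recording counts per split, taken from the list above.
splits = {"train": 14717, "dev": 1085, "test": 1028}
total = sum(splits.values())
print(total)  # 16830

# Roughly 87% / 6% / 6% of the recordings.
shares = {name: round(100 * n / total, 1) for name, n in splits.items()}
print(shares)  # {'train': 87.4, 'dev': 6.4, 'test': 6.1}
```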
## Errors
You can report transcription and normalization errors in this corpus via the issue tracker of this project.
## Citations
```bibtex
@InProceedings{Radeck-Arneth2015,
author="Radeck-Arneth, Stephan
and Milde, Benjamin
and Lange, Arvid
and Gouv{\^e}a, Evandro
and Radomski, Stefan
and M{\"u}hlh{\"a}user, Max
and Biemann, Chris",
title="Open Source German Distant Speech Recognition: Corpus and Acoustic Model",
booktitle="Text, Speech, and Dialogue",
year="2015",
publisher="Springer International Publishing",
address="Pilsen, Czech Republic",
pages="480--488",
isbn="978-3-319-24033-6",
doi="10.1007/978-3-319-24033-6_54"
}
```