Datasets: LIUM /

sanchit-gandhi (HF staff) committed on
Commit d21d158
Parent: b013c71

Update README.md

Files changed (1):
  1. README.md +17 -8
README.md CHANGED
@@ -68,16 +68,25 @@ The audio and transcriptions are in English, as per the TED talks at http://www.
  ## Dataset Structure
 
  ### Data Instances
-
- TODO
-
+ ```
+ {'audio': {'path': '/home/sanchitgandhi/cache/downloads/extracted/6e3655f9e735ae3c467deed1df788e0dabd671c1f3e2e386e30aa3b571bd9761/TEDLIUM_release1/train/stm/PaulaScher_2008P.stm',
+ 'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346,
+ 0.00091553, 0.00085449], dtype=float32),
+ 'sampling_rate': 16000},
+ 'text': '{COUGH} but <sil> i was so {COUGH} utterly unqualified for(2) this project and {NOISE} so utterly ridiculous {SMACK} and ignored the brief {SMACK} <sil>',
+ 'speaker_id': 'PaulaScher_2008P',
+ 'gender': 'female',
+ 'file': '/home/sanchitgandhi/cache/downloads/extracted/6e3655f9e735ae3c467deed1df788e0dabd671c1f3e2e386e30aa3b571bd9761/TEDLIUM_release1/train/stm/PaulaScher_2008P.stm',
+ 'id': 'PaulaScher_2008P-1003.35-1011.16-<o,f0,female>'}
+ ```
  ### Data Fields
 
- - gender: an integer value corresponding to the gender of the speaker.
+ - audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
+ - file: A path to the downloaded audio file in .sth format.
+ - text: the transcription of the audio file.
+ - gender: the gender of the speaker. One of: male, female or N/A.
  - id: unique id of the data sample.
  - speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples.
- - speech: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the "audio" column, i.e. `dataset[0]["audio"]` should always be preferred over `dataset["audio"][0]`.
- - text: the transcription of the audio file.
 
  ### Data Splits
 
@@ -100,11 +109,11 @@ TED-LIUM was built during [The International Workshop on Spoken Language Trans-
 
  #### Initial Data Collection and Normalization
 
- The data was obtained from publicly availably TED talks at http://www.ted.com. Proper alignments between the speech and the transcribed text were generated using an in-house speaker segmentation and clustering tool (LIUM_SpkDiarization). Speech disfluencies (e.g. repetitions, hesitations, false starts) were treated in the following way: the repetitions were transcribed, the hesitations were mapped to a specific filler word and the false starts were not taken into account. For full details on the data collection and processing, refer to the [TED-LIUM paper](https://aclanthology.org/L12-1405/).
+ The data was obtained from publicly available TED talks at http://www.ted.com. Proper alignments between the speech and the transcribed text were generated using an in-house speaker segmentation and clustering tool (LIUM_SpkDiarization). Speech disfluencies (e.g. repetitions, hesitations, false starts) were treated in the following way: the repetitions were transcribed, the hesitations were mapped to a specific filler word and the false starts were not taken into account. For full details on the data collection and processing, refer to the [TED-LIUM paper](https://aclanthology.org/L12-1405/).
 
  #### Who are the source language producers?
 
- [Needs More Information]
+ TED Talks are influential videos from expert speakers on education, business, science, tech and creativity.
 
  ### Annotations
 
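The updated README stresses that `dataset[0]["audio"]` should be preferred over `dataset["audio"][0]` because the audio column decodes lazily on access. The reasoning can be sketched with a toy lazy column — a pure-Python illustration of the access-order cost, not the actual `datasets` internals (all class and path names here are made up for the example):

```python
class LazyAudioColumn:
    """Toy stand-in for a column whose values are decoded only on access."""

    def __init__(self, paths):
        self.paths = paths
        self.decode_count = 0  # tracks how many costly decodes we triggered

    def decode(self, path):
        self.decode_count += 1  # stands in for decode + resample of one file
        return f"decoded:{path}"


class ToyDataset:
    """Toy stand-in for a dataset with one lazily decoded 'audio' column."""

    def __init__(self, paths):
        self.column = LazyAudioColumn(paths)

    def __getitem__(self, key):
        if isinstance(key, int):
            # Row access (dataset[0]): decode only the one requested sample.
            return {"audio": self.column.decode(self.column.paths[key])}
        # Column access (dataset["audio"]): decode every sample to build
        # the full column before any single element can be returned.
        return [self.column.decode(p) for p in self.column.paths]


ds = ToyDataset([f"talk_{i}.sph" for i in range(100)])

row_sample = ds[0]["audio"]                  # row-first: one decode
decodes_row_first = ds.column.decode_count

col_sample = ds["audio"][0]                  # column-first: decodes all 100
decodes_column_first = ds.column.decode_count - decodes_row_first
```

Both expressions return the same sample, but the column-first form pays for the entire column; with real audio files that difference is the "significant amount of time" the README warns about.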