---
license: cc-by-sa-4.0
dataset_info:
  features:
  - name: meeting_id
    dtype: string
  - name: speaker_id
    dtype: string
  - name: audio_id
    dtype: string
  - name: audio
    dtype: audio
  - name: segments
    list:
    - name: end
      dtype: float64
    - name: start
      dtype: float64
    - name: transcript
      dtype: string
    - name: words
      list:
      - name: end
        dtype: float64
      - name: start
        dtype: float64
      - name: word
        dtype: string
  - name: transcript
    dtype: string
  splits:
  - name: dev
    num_bytes: 14155765669
    num_examples: 130
  - name: train
    num_bytes: 74754662936
    num_examples: 684
  - name: test
    num_bytes: 13775584735
    num_examples: 124
  download_size: 120234623488
  dataset_size: 102802035597
configs:
- config_name: default
  data_files:
  - split: dev
    path: data/dev/*
  - split: test
    path: data/test/*
  - split: train
    path: data/train/*
- config_name: example
  data_files:
  - split: train
    path: data/example/*
task_categories:
- automatic-speech-recognition
- voice-activity-detection
language:
- fr
---
_Note: if the data viewer is not working, use the "example" subset._
# SUMM-RE
The SUMM-RE dataset is a corpus of French meeting-style conversations whose transcripts are aligned with the audio signal.
It was created for the purpose of the SUMM-RE project (ANR-20-CE23-0017).
The full dataset is described in Hunter et al. (2024): "SUMM-RE: A corpus of French meeting-style conversations".
- **Created by:** Recording and manual correction of the corpus were carried out by the Language and Speech Lab (LPL) at the University of Aix-Marseille, France.
- **Funded by:** The French National Research Agency (ANR), under the SUMM-RE project grant (ANR-20-CE23-0017).
- **Shared by:** LINAGORA (coordinator of the SUMM-RE project)
- **Language:** French
- **License:** CC BY-SA 4.0
## Dataset Description
Data from the `dev` and `test` splits have been manually transcribed and aligned.
Data from the `train` split has been automatically transcribed and aligned with the Whisper pipeline described in Yamasaki et al. (2023): "Transcribing And Aligning Conversational Speech: A Hybrid Pipeline Applied To French Conversations".
The audio and transcripts used to evaluate this pipeline, a subset of the `dev` split<sup>(*)</sup>, can be found on [Ortolang](https://www.ortolang.fr/market/corpora/summ-re-asru/).
The `dev` and `test` splits of SUMM-RE can be used for the evaluation of automatic speech recognition models and voice activity detection for conversational, spoken French.
Speaker diarization can also be evaluated if the individual tracks of the same meeting are merged together, as in the sketch below.
SUMM-RE transcripts can be used for the training of language models.
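A minimal sketch of such a merge, assuming the tracks of a meeting share a sampling rate (see the note on sampling rates below) and truncating them to the shortest track:
```python
import datasets
import numpy as np
from collections import defaultdict

# Group the individual speaker tracks by meeting.
# Note: this decodes all dev audio (~43 hours) into memory; restrict the
# loop to a few meetings for a lighter experiment.
devset = datasets.load_dataset("linagora/SUMM-RE", split="dev", streaming=True)
tracks = defaultdict(list)
for sample in devset:
    tracks[sample["meeting_id"]].append(sample["audio"]["array"])

# Mix each meeting's tracks into a single mono signal by averaging.
meeting_mix = {}
for meeting_id, arrays in tracks.items():
    length = min(len(a) for a in arrays)  # truncate to the shortest track
    meeting_mix[meeting_id] = np.mean([a[:length] for a in arrays], axis=0)
```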
Each conversation lasts roughly 20 minutes. The number of conversations contained in each split is as follows:
- `train`: 210 (x ~20 minutes = ~67 hours)
- `dev`: 36 (x ~20 minutes = ~12 hours)
- `test`: 37 (x ~20 minutes = ~12 hours)
Each conversation contains 3-4 speakers (and in rare cases, 2) and each participant has an individual microphone and associated audio track, giving rise to the following number of tracks for each split:
- `train`: 684 (x ~20 minutes = ~226 hours)
- `dev`: 130 (x ~20 minutes = ~43 hours)
- `test`: 124 (x ~20 minutes = ~41 hours)
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
To visualize an example from the corpus, select the "example" subset in the Dataset Viewer.
The corpus contains the following information for each audio track:
- **meeting_id**, e.g. `001a_PARL`, includes (see the parsing sketch after this list):
- experiment number, e.g. 001
- meeting order: a|b|c (there were three meetings per experiment)
- experiment type: E (experiment) | P (pilot experiment)
- scenario/topic: A|B|C|D|E
- meeting type: R (reporting) | D (decision) | P (planning)
- recording location: L (LPL) | H (H2C2 studio) | Z (Zoom) | D (at home)
- **speaker_id**
- **audio_id**: meeting_id + speaker_id
- **audio**: the audio track for an individual speaker
- **segments**: a list of dictionaries where each entry provides the transcription of a segment with timestamps for the segment and each word that it contains. An example is:
```json
[
  {
    "start": 0.5,
    "end": 1.2,
    "transcript": "bonjour toi",
    "words": [
      {
        "word": "bonjour",
        "start": 0.5,
        "end": 0.9
      },
      {
        "word": "toi",
        "start": 0.9,
        "end": 1.2
      }
    ]
  },
  ...
]
```
- **transcript**: a string formed by concatenating the text of all the segments (note that these transcripts implicitly include the periods of silence during which other speakers are speaking on their own audio tracks)
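A minimal sketch of how the naming scheme and the segment timestamps can be exploited (the helper names are our own, and the field layout is assumed from the description above):
```python
def parse_audio_id(audio_id: str) -> dict:
    """Unpack an audio_id such as '018a_EARZ_055' using the scheme above."""
    prefix, code, speaker_id = audio_id.split("_")
    return {
        "experiment_number": prefix[:3],  # e.g. '018'
        "meeting_order": prefix[3],       # a | b | c
        "experiment_type": code[0],       # E | P
        "scenario": code[1],              # A | B | C | D | E
        "meeting_type": code[2],          # R | D | P
        "location": code[3],              # L | H | Z | D
        "speaker_id": speaker_id,
    }

def speech_duration(sample: dict) -> float:
    """Total speech time of a track in seconds, from its segment timestamps."""
    return sum(seg["end"] - seg["start"] for seg in sample["segments"])

print(parse_audio_id("018a_EARZ_055"))
```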
## Example Use
To load the full dataset:
```python
import datasets
ds = datasets.load_dataset("linagora/SUMM-RE")
```
Use the `streaming` option to avoid downloading the full dataset when only a single split is required:
```python
import datasets
devset = datasets.load_dataset("linagora/SUMM-RE", split="dev", streaming=True)
for sample in devset:
...
```
Load some short extracts of the data to explore the structure:
```python
import datasets
ds = datasets.load_dataset("linagora/SUMM-RE", "example")
sample = ds["train"][0]
print(sample)
```
## Dataset Creation
### Curation Rationale
The full SUMM-RE corpus, which includes meeting summaries, is designed to train and evaluate models for meeting summarization. This version is an extract of the full corpus used to evaluate various stages of the summarization pipeline, starting with automatic transcription of the audio signal.
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
The SUMM-RE corpus is an original corpus designed by members of LINAGORA and the University of Aix-Marseille and recorded by the latter.
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
For details, see [Hunter et al. (2024)](https://hal.science/hal-04623038/).
#### Audio Sampling Rates
By default, files recorded through Zoom have a sampling rate of 32,000 Hz and all other files have a sampling rate of 48,000 Hz. The exceptions are:
- 44,100 Hz: files matching `071*`
- 32,000 Hz: files matching `101*`
- 22,050 Hz: `018a_EARZ_055.wav`, `018a_EARZ_056.wav`, `018a_EARZ_057.wav`, `018a_EARZ_058.wav`, `020b_EBDZ_017.wav`, `020b_EBDZ_053.wav`, `020b_EBDZ_057.wav`, `020b_EBDZ_063.wav`, `027a_EBRH_025.wav`, `027a_EBRH_075.wav`, `027a_EBRH_078.wav`, `032b_EADH_084.wav`, `032b_EADH_085.wav`, `032b_EADH_086.wav`, `032b_EADH_087.wav`, `033a_EBRH_091.wav`, `033c_EBPH_092.wav`, `033c_EBPH_093.wav`, `033c_EBPH_094.wav`, `034a_EBRH_095.wav`, `034a_EBRH_096.wav`, `034a_EBRH_097.wav`, `034a_EBRH_098.wav`, `035b_EADH_088.wav`, `035b_EADH_096.wav`, `035b_EADH_097.wav`, `035b_EADH_098.wav`, `036c_EAPH_091.wav`, `036c_EAPH_092.wav`, `036c_EAPH_093.wav`, `036c_EAPH_099.wav`, `069c_EEPL_156.wav`, `069c_EEPL_157.wav`, `069c_EEPL_158.wav`, `069c_EEPL_159.wav`
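If a uniform rate is needed, the audio column can be re-cast so that `datasets` resamples on the fly at decoding time (a sketch; 16 kHz is merely a common choice for ASR models, not a property of the corpus):
```python
import datasets

ds = datasets.load_dataset("linagora/SUMM-RE", split="dev", streaming=True)
# Decode every file at a uniform 16 kHz, whatever its native sampling rate.
ds = ds.cast_column("audio", datasets.Audio(sampling_rate=16000))
```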
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
Corpus design and production:
- University of Aix-Marseille: Océane Granier (corpus conception, recording, annotation), Laurent Prévot (corpus conception, annotation, supervision), Hiroyoshi Yamasaki (corpus cleaning, alignment and anonymization), Roxane Bertrand (corpus conception and annotation), with helpful input from Brigitte Bigi and Stéphane Rauzy.
- LINAGORA: Julie Hunter, Kate Thompson and Guokan Shang (corpus conception)
Corpus participants:
- Participants for the in-person conversations were recruited on the University of Aix-Marseille campus.
- Participants for the Zoom meetings were recruited through [Prolific](https://www.prolific.com/).
### Annotations
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
Transcripts are not punctuated and all words are in lower case.
Annotations follow the conventions laid out in chapter 3 of [The SPPAS Book](https://sppas.org/book_03_annotations.html) by Brigitte Bigi. Transcripts may therefore contain additional annotations in the following contexts:
* truncated words, noted as a - at the end of the token string (an ex- example);
* noises, noted by a * (not available for some languages);
* laughter, noted by a @ (not available for some languages);
* short pauses, noted by a +;
* elisions, mentioned in parentheses;
* specific pronunciations, noted with brackets [example,eczap];
* comments are preferably noted inside braces {this is a comment!};
* comments can also be noted inside brackets, as long as no comma is used [this and this];
* liaisons, noted between = (this =n= example);
* morphological variants, noted with <ice scream,I scream>;
* proper name annotation, like $ John S. Doe $.
Note that the symbols * + @ must be surrounded by whitespace.
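For ASR evaluation it is often useful to strip these marks before scoring. Below is a minimal, illustrative cleaner; the regexes are our own approximation of the conventions above, not an official SPPAS tool:
```python
import re

def clean_transcript(text: str) -> str:
    """Strip SPPAS-style annotation marks (approximate and illustrative only)."""
    text = re.sub(r"\{[^}]*\}", " ", text)             # {comments}
    text = re.sub(r"\[([^],]*),[^]]*\]", r"\1", text)  # [word,pronunciation] -> word
    text = re.sub(r"\[[^]]*\]", " ", text)             # remaining [bracketed comments]
    text = re.sub(r"<([^>,]*),[^>]*>", r"\1", text)    # <variant,variant> -> first form
    text = re.sub(r"\$([^$]*)\$", r"\1", text)         # keep names, drop the $ markers
    text = re.sub(r"[()]", "", text)                   # keep elided letters, drop parentheses
    tokens = []
    for tok in text.split():
        if tok in {"*", "+", "@"}:                     # noises, short pauses, laughter
            continue
        if tok.endswith("-"):                          # truncated words, e.g. 'ex-'
            continue
        if tok.startswith("=") and tok.endswith("="):  # =liaison= marks
            continue
        tokens.append(tok)
    return " ".join(tokens)

print(clean_transcript("bonjour + c'est un ex- exemple [exemple,eczap] @"))
# -> "bonjour c'est un exemple exemple"
```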
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
Principal annotator for `dev`: Océane Granier
Principal annotators for `test`: Eliane Bailly, Manon Méaume, Lyne Rahabi, Lucille Rico
Additional assistance from: Laurent Prévot, Hiroyoshi Yamasaki and Roxane Bertrand
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
A portion of the `dev` split has been (semi-automatically) anonymized for the pipeline described in Yamasaki et al. (2023).
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
## Citations
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
Please cite the papers below if using the dataset in your work.
**Description of the full dataset:**
Julie Hunter, Hiroyoshi Yamasaki, Océane Granier, Jérôme Louradour, Roxane Bertrand, Kate Thompson and Laurent Prévot (2024): "[SUMM-RE: A corpus of French meeting-style conversations](https://hal.science/hal-04623038/)," TALN 2024.
```bibtex
@inproceedings{hunter2024summre,
title={SUMM-RE: A corpus of French meeting-style conversations},
author={Hunter, Julie and Yamasaki, Hiroyoshi and Granier, Oc{\'e}ane and Louradour, J{\'e}r{\^o}me and Bertrand, Roxane and Thompson, Kate and Pr{\'e}vot, Laurent},
booktitle={Actes de JEP-TALN-RECITAL 2024. 31{\`e}me Conf{\'e}rence sur le Traitement Automatique des Langues Naturelles, volume 1: articles longs et prises de position},
pages={508--529},
year={2024},
organization={ATALA \& AFPC}
}
```
**The Whisper Pipeline:**
Hiroyoshi Yamasaki, Jérôme Louradour, Julie Hunter and Laurent Prévot (2023): "[Transcribing and aligning conversational speech: A hybrid pipeline applied to French conversations](https://hal.science/hal-04404777/document)," Workshop on Automatic Speech Recognition and Understanding (ASRU).
```bibtex
@inproceedings{yamasaki2023transcribing,
title={Transcribing And Aligning Conversational Speech: A Hybrid Pipeline Applied To French Conversations},
author={Yamasaki, Hiroyoshi and Louradour, J{\'e}r{\^o}me and Hunter, Julie and Pr{\'e}vot, Laurent},
booktitle={2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)},
pages={1--6},
year={2023},
organization={IEEE}
}
```
<sup>(*)</sup>The following audio tracks were used to evaluate the pipeline in Yamasaki et al. (2023):
```python
asru = ['018a_EARZ_055', '018a_EARZ_056', '018a_EARZ_057', '018a_EARZ_058', '020b_EBDZ_017', '020b_EBDZ_053', '020b_EBDZ_057', '020b_EBDZ_063', '027a_EBRH_025', '027a_EBRH_075', '027a_EBRH_078', '032b_EADH_084', '032b_EADH_085', '032b_EADH_086', '032b_EADH_087', '033a_EBRH_091', '033a_EBRH_092', '033a_EBRH_093', '033a_EBRH_094', '033c_EBPH_091', '033c_EBPH_092', '033c_EBPH_093', '033c_EBPH_094', '034a_EBRH_095', '034a_EBRH_096', '034a_EBRH_097', '034a_EBRH_098', '035b_EADH_088', '035b_EADH_096', '035b_EADH_097', '035b_EADH_098', '036c_EAPH_091', '036c_EAPH_092', '036c_EAPH_093', '036c_EAPH_099', '069c_EEPL_156', '069c_EEPL_157', '069c_EEPL_158', '069c_EEPL_159']
```
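To reproduce this evaluation subset, the `dev` split can be filtered on `audio_id` (a sketch; it assumes the IDs above match the `audio_id` field, i.e. meeting_id plus speaker_id):
```python
import datasets

devset = datasets.load_dataset("linagora/SUMM-RE", split="dev", streaming=True)
asru_set = set(asru)  # the list of IDs defined above
asru_tracks = devset.filter(lambda sample: sample["audio_id"] in asru_set)
```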
## Acknowledgements
We gratefully acknowledge support from the Agence Nationale de la Recherche for the SUMM-RE project (ANR-20-CE23-0017).