Jeronymous committed: Update README.md (#2)
Commit bf92c75 (parent: 69a7f53)

README.md (updated):
        path: data/train/*
  - config_name: example
    data_files:
      - split: train
        path: data/example/*
task_categories:
- automatic-speech-recognition
language:
- fr
---

_Note: if the data viewer is not working, use the "example" subset._

# SUMM-RE

SUMM-RE is a collection of transcripts of French conversations, aligned with the audio signal.

Data from the `dev` and `test` splits have been manually transcribed and aligned and so are suitable for the evaluation of automatic speech recognition and voice activity detection models.

The full dataset is described in Hunter et al. (2024): "SUMM-RE: A corpus of French meeting-style conversations".

The `dev` and `test` splits of SUMM-RE can be used to evaluate automatic speech recognition and voice activity detection models for conversational, spoken French.
Speaker diarization can also be evaluated if several tracks of the same meeting are merged together.
SUMM-RE transcripts can be used for training language models, although, due to its size, the corpus is not suitable for training automatic speech recognition or voice activity detection models.
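Since the corrected `dev`/`test` transcripts are intended for ASR evaluation, here is a minimal word error rate (WER) sketch in plain Python. It is not part of the dataset card; real evaluations would typically use a dedicated library such as `jiwer`.

```python
# Minimal word error rate (WER) sketch for scoring an ASR hypothesis
# against a manually corrected reference transcript.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Word-level Levenshtein distance, single-row dynamic programming.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j] + 1,         # deletion
                                   d[j - 1] + 1,     # insertion
                                   prev + (r != h))  # substitution
    return d[len(hyp)] / len(ref)

print(wer("bonjour tout le monde", "bonjour à tout le monde"))  # 0.25
```

WER is the edit distance over words divided by the reference length, so one inserted word against a four-word reference gives 0.25.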

## Dataset Description

The SUMM-RE dataset is a corpus of meeting-style conversations in French created for the purpose of the SUMM-RE project (ANR-20-CE23-0017).

Each conversation lasts roughly 20 minutes. The number of conversations contained in each split is as follows:
- `train`: 210 (x ~20 minutes = ~70 hours)
- `dev`: 36 (x ~20 minutes = ~12 hours)
- `test`: 37 (x ~20 minutes = ~12.3 hours)

Each conversation contains 3-4 speakers (and in rare cases, 2), and each participant has an individual microphone and associated audio track, giving rise to the following number of tracks for each split:
- `train`: 684
- `dev`: 130
- `test`: 124
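The hour figures above follow directly from the conversation counts; a quick sketch of the arithmetic:

```python
# Approximate split durations from the conversation counts listed above.
MINUTES_PER_CONVERSATION = 20  # each conversation lasts roughly 20 minutes

conversations = {"train": 210, "dev": 36, "test": 37}
hours = {split: n * MINUTES_PER_CONVERSATION / 60
         for split, n in conversations.items()}
print(f"train: {hours['train']} h")  # train: 70.0 h
```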

- **License:** CC BY-SA 4.0

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

To visualize an example from the corpus, select the "example" split in the Dataset Viewer.

The corpus contains the following information for each audio track:
- **meeting_id**, e.g. 001a_PARL, includes:
  - experiment number, e.g. 001
  - meeting order: a|b|c (there were three meetings per experiment)
  - recording location: L (LPL) | H (H2C2 studio) | Z (Zoom) | D (at home)
- **speaker_id**
- **audio_id**: meeting_id + speaker_id
- **audio**: the audio track for an individual speaker
- **segments**: a list of dictionaries where each entry provides the transcription of a segment, with timestamps for the segment and for each word that it contains. An example is:

```json
[
    {
        "start": 0.5,
        ...
    },
    ...
]
```

- **transcript**: a string formed by concatenating the text from all of the segments (note that these transcripts implicitly include periods of silence where other speakers are speaking in other audio tracks)
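As an illustration of how these fields fit together, here is a sketch (not an official helper): the segment key `"text"` and the `suffix` label are assumptions, since the excerpt above only shows the `"start"` key and part of the meeting_id layout.

```python
# Sketch: splitting a meeting_id and rebuilding a transcript from segments.
# The "text" key and the "suffix" label are assumptions, not documented names.

def parse_meeting_id(meeting_id: str) -> dict:
    """Split a meeting_id such as '001a_PARL' into its documented parts."""
    head, suffix = meeting_id.split("_", 1)
    return {
        "experiment": head[:3],     # experiment number, e.g. "001"
        "meeting_order": head[3:],  # "a", "b" or "c"
        "suffix": suffix,           # remainder of the id, e.g. "PARL"
    }

def join_transcript(segments: list) -> str:
    """Concatenate segment texts into a single transcript string."""
    return " ".join(seg["text"] for seg in segments)

print(parse_meeting_id("001a_PARL"))
print(join_transcript([{"start": 0.5, "text": "bonjour"},
                       {"start": 1.4, "text": "ça va"}]))  # bonjour ça va
```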

## Example Use

To load the full dataset:

```python
import datasets

ds = datasets.load_dataset("linagora/SUMM-RE")
```

Use the `streaming` option to avoid downloading the full dataset when only one split is required:

```python
import datasets

devset = datasets.load_dataset("linagora/SUMM-RE", split="dev", streaming=True)

for sample in devset:
    print(sample)
    break
```
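With streaming, the split is an iterable, so standard iterator tools apply; for example, taking a fixed number of samples with `itertools.islice`. The sketch below uses a plain generator as a stand-in for the streamed split, so it runs without downloading anything.

```python
from itertools import islice

# Stand-in for a streamed split: any iterable of sample dicts behaves the same.
stream = ({"audio_id": f"{i:03d}a_PARL_spk"} for i in range(1000))

first_three = list(islice(stream, 3))  # consumes only the first 3 samples
print(len(first_three))  # 3
```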

Load a tiny subset of the data to explore the structure:

```python
import datasets

ds = datasets.load_dataset("linagora/SUMM-RE", "example")

print(ds["train"][0])
```

## Dataset Creation