Create README.md
---
task_categories:
- translation
- automatic-speech-recognition
language:
- gl
- en
size_categories:
- 1K<n<10K
---
# Dataset Details

**FLEURS-SpeechT-GL-EN** is a Galician-to-English dataset for the speech translation task.

This dataset has been compiled from Google's **[FLEURS dataset](https://huggingface.co/datasets/google/fleurs)**.
It contains ~10h11m of Galician audio, along with text transcriptions and the corresponding English translations.

# Preprocessing

This dataset has been generated from Google's FLEURS speech dataset by aligning the English and Galician data.
The alignment has been performed following **[ymoslem's FLEURS dataset processing script](https://github.com/ymoslem/Speech/blob/main/FLEURS-GA-EN.ipynb)**.

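As a rough illustration of the alignment idea (a minimal sketch, not the exact notebook), the snippet below assumes the FLEURS `gl_es` (Galician) and `en_us` (English) configurations and pairs sentences on their shared `id` field:

```python
# Sketch: pair FLEURS Galician and English examples that share the same sentence id.
from datasets import load_dataset

# Depending on the datasets version, trust_remote_code=True may be required here.
gl = load_dataset("google/fleurs", "gl_es", split="train")
en = load_dataset("google/fleurs", "en_us", split="train")

# One English transcription per sentence id (ids can repeat across recordings).
en_text = {ex["id"]: ex["transcription"] for ex in en}

# Keep Galician recordings that have an English counterpart and attach both texts.
aligned = gl.filter(lambda ex: ex["id"] in en_text)
aligned = aligned.map(
    lambda ex: {"text_gl": ex["transcription"], "text_en": en_text[ex["id"]]}
)
```
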
### English translation quality

To get a sense of the quality of the English text with respect to the Galician transcriptions, a Quality Estimation (QE) model has been applied; a minimal scoring sketch follows the list below.

- **QE model**: [Unbabel/wmt23-cometkiwi-da-xl](https://huggingface.co/Unbabel/wmt23-cometkiwi-da-xl)
- **Average QE score**: 0.76

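A minimal scoring sketch with the `unbabel-comet` package (the toy sentence pair is illustrative; in practice the scores would be computed over the dataset's `text_gl`/`text_en` columns):

```python
# Sketch: reference-free QE with CometKiwi; each sample needs only source and translation.
# Requires `pip install unbabel-comet` and access to the gated model on the Hub.
from comet import download_model, load_from_checkpoint

model_path = download_model("Unbabel/wmt23-cometkiwi-da-xl")
model = load_from_checkpoint(model_path)

data = [
    {"src": "Isto é un exemplo de transcrición en galego.",          # Galician source (toy example)
     "mt": "This is an example of a transcription in Galician."},    # English translation
]
output = model.predict(data, batch_size=8, gpus=1)  # set gpus=0 to run on CPU
print(output.scores)        # per-segment QE scores
print(output.system_score)  # corpus-level average
```
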
# Dataset Structure

```
DatasetDict({
    train: Dataset({
        features: ['id', 'audio', 'text_gl', 'text_en'],
        num_rows: 3450
    })
})
```

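A minimal loading sketch (the repository id below is a placeholder; substitute this dataset's actual Hugging Face Hub id):

```python
from datasets import load_dataset

# Placeholder repo id: replace with this dataset's actual Hub id.
ds = load_dataset("<namespace>/FLEURS-SpeechT-GL-EN")

sample = ds["train"][0]
print(sample["text_gl"])                 # Galician transcription
print(sample["text_en"])                 # English translation
print(sample["audio"]["sampling_rate"])  # audio is decoded into an array plus its sampling rate
```
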
# Citation

```
@article{fleurs2022arxiv,
  title = {FLEURS: Few-shot Learning Evaluation of Universal Representations of Speech},
  author = {Conneau, Alexis and Ma, Min and Khanuja, Simran and Zhang, Yu and Axelrod, Vera and Dalmia, Siddharth and Riesa, Jason and Rivera, Clara and Bapna, Ankur},
  journal = {arXiv preprint arXiv:2205.12446},
  url = {https://arxiv.org/abs/2205.12446},
  year = {2022}
}
```

Yasmin Moslem's preprocessing script: https://github.com/ymoslem/Speech/blob/main/FLEURS-GA-EN.ipynb

## Dataset Card Contact

Juan Julián Cea Morán (jjceamoran@gmail.com)