Commit 3e095d8 by RedTachyon
Parent(s): 03066d6

Upload folder using huggingface_hub

Files changed:
- zTNVjQXZyx/15_image_0.png +3 -0
- zTNVjQXZyx/15_image_1.png +3 -0
- zTNVjQXZyx/15_image_2.png +3 -0
- zTNVjQXZyx/1_image_0.png +3 -0
- zTNVjQXZyx/2_image_0.png +3 -0
- zTNVjQXZyx/zTNVjQXZyx.md +534 -0
- zTNVjQXZyx/zTNVjQXZyx_meta.json +25 -0
# Textless Low-Resource Speech-To-Speech Translation With Unit Language Models

Anonymous authors
Paper under double-blind review

## Abstract

Existing speech-to-speech translation models fall into two camps: textless models trained with hundreds of hours of parallel speech data, or unsupervised models that leverage text as an intermediate step. Both approaches limit building speech-to-speech translation models for a wide range of languages, as they exclude languages that are primarily spoken and language pairs that lack large-scale parallel speech data. We present a new framework for training textless low-resource speech-to-speech translation (S2ST) systems that only need dozens of hours of parallel speech data. We reformulate S2ST as a unit-to-unit seq2seq translation task, and start by pretraining a model on large-scale monolingual speech data. Then, we finetune it with a small amount of parallel speech data (20-60 hours). Lastly, we improve model performance through an unsupervised backtranslation objective. We train and evaluate our models for English-to-German, German-to-English and Marathi-to-English translation on three different domains (European Parliament, Common Voice, and All India Radio) with single-speaker synthesized speech data. Evaluated using the ASR-BLEU metric, our models achieve reasonable performance on all three domains, with some being within 1-2 points of our supervised topline.
## 1 Introduction

The speech-to-speech translation (S2ST) task involves translating input speech in the source language to speech in the target language. In many ways, S2ST represents the "holy grail" of translation, as it enables natural, realtime, spoken communication. S2ST has a rich history, from cascaded systems combining Automatic Speech Recognition (ASR), Machine Translation (MT), and Text To Speech (TTS) technologies (Nakamura et al., 2006) to recently proposed neural end-to-end systems (Lee et al., 2022a; Seamless Communication et al., 2023) that directly map from input source language speech to output target language speech. S2ST systems (Jia et al., 2019; Lee et al., 2022a;b; Jia et al., 2021; Duquenne et al., 2022; Seamless Communication et al., 2023) have benefited from model and data scaling, leveraging increasing amounts of parallel speech and/or text data across languages. Yet, this is feasible only for a fraction of the world's 7000 languages (Lewis et al., 2016); the majority of world languages have low-resource or no parallel translation data available (Haddow et al., 2022). Furthermore, thousands of languages are primarily spoken without standardized writing systems (about 3000 languages in Ethnologue (Lewis et al., 2016) have no reported writing system), necessitating textless language processing.

Recent work on textless speech translation (Lee et al., 2022b; Kim et al., 2023) requires large amounts of parallel cross-lingual speech data, making it difficult to adapt for low-resource speech translation. On the other hand, some other papers have proposed approaches for training S2ST models that do not need any parallel speech data at all; however, these approaches either train cascaded models that have intermediate text outputs (Wang et al., 2022a; Fu et al., 2023) or use text supervision during training (Nachmani et al., 2023). As a result, these are difficult to adapt for speech translation on languages (spoken, with non-standard orthographies, or with poor ASR) that would benefit from purely textless approaches.

We propose a learning framework that requires a much more modest amount (dozens of hours) of parallel speech data to train a textless speech-to-speech translation model. We begin by pretraining an encoder-decoder language model over self-supervised speech units using non-parallel speech corpora, then finetune it for S2ST on a low-resource parallel S2ST corpus, and finally perform unsupervised backtranslation to further improve performance. We achieve this by reformulating S2ST as a unit-to-unit machine translation problem. Figure 1 illustrates our method, comparing it to previous work. Modelling real speech data with speech unit sequences poses challenges, such as inherent unit sequence noise and ambiguity, that are orthogonal to our research questions. Thus, for simplicity, we use single-speaker synthesized speech data to train and evaluate our models, following early S2ST work (Jia et al., 2019).

![1_image_0.png](1_image_0.png)

| Approach                     | Speech Vocoder          | Parallel Training Data | Is it Textless? |
|------------------------------|-------------------------|------------------------|-----------------|
| Textless high-resource S2ST  | Unit Vocoder (HiFi-GAN) | Thousands of hours     | ✅              |
| Unsupervised text-based S2ST | Unsupervised TTS        | None                   | ❌              |
| Ours                         | Unit Vocoder (HiFi-GAN) | 20-60 hours            | ✅              |

Figure 1: Overview of speech-to-speech translation systems. We compare our formulation to two relevant lines of work. We present the first textless speech-to-speech system that does not require a large-scale parallel training dataset.
We train two English ↔ German S2ST models in the European Parliament (Iranzo-Sánchez et al., 2019) and Common Voice (Ardila et al., 2020) domains and two English ↔ Marathi S2ST models in the European Parliament (Iranzo-Sánchez et al., 2019) and All India Radio (Bhogale et al., 2022) domains, and evaluate the en→de, de→en and mr→en translation directions. We find that with just 20 hrs of parallel en→de and de→en data and 60 hrs of parallel en→mr and mr→en data, our models achieve reasonable performance on all three domains, obtaining ASR-BLEUs of 10.0 (de→en), 8.3 (en→de) and 9.2 (mr→en) for the European Parliament domain, 7.7 (de→en) for the Common Voice domain, and 10.0 (mr→en) for the All India Radio domain. Our results are within 1-2 ASR-BLEU of our high-resource supervised topline for the European Parliament domain for the de→en and mr→en language pairs. We will release code and model weights at the time of publication.

## 2 Methods

We represent the input and output speech utterances as discrete unit sequences and train a unit-based encoder-decoder model for the speech-to-speech translation task. Therefore, our pipeline consists of a speech-to-unit encoder (S2U), a unit encoder-decoder (U2U) and a unit-to-speech vocoder (U2S). Of these, S2U and U2S are essentially speech-unit interfaces; we base these largely on prior work (Hsu et al., 2021; Polyak et al., 2021). Our main contribution is the middle unit-based encoder-decoder model (U2U) that is trained for S2ST using our three-step Pretrain-Finetune-Backtranslate approach illustrated in Figure 2. We describe each of these components below.

## 2.1 Speech-To-Unit Encoder (S2U)

We first describe the model we use to map speech waveforms into a sequence of discrete unit representations.

Past work (Hsu et al., 2021; Chung et al., 2021) has explored learning self-supervised discrete representations of speech. The learned discrete representations, or units, preserve much of the information contained in the original input signal (Pasad et al., 2021), including phonemes, word identity, speaker identity, and so forth.

![2_image_0.png](2_image_0.png)

Figure 2: Training a unit-based encoder-decoder model for speech-to-speech translation. The first **Pretrain** step trains on large-scale monolingual speech data using a denoising pretraining loss. The second **Finetune** step trains on a low-resource amount (20-60 hours) of parallel speech-to-speech translation data using a supervised finetuning loss. The third **Backtranslate** step trains using a combination of a round-trip consistency loss (on monolingual data) and the supervised finetuning loss (on parallel data) used in the second step.
Critically, text transcriptions or other annotations of the speech are not necessary to discover these units. It has recently become popular in the research community to train autoregressive language models (Lakhotia et al., 2021; Borsos et al., 2022) on these unit representations, enabling NLP tasks to be performed on spoken language without the need to first transcribe speech waveforms into text.

We base our speech-to-unit encoder on the pre-trained HuBERT (Hsu et al., 2021) base model. As proposed by HuBERT (Hsu et al., 2021), we train a k-means clustering model over HuBERT embeddings at an intermediate layer, choosing the layer index on the basis of the units' PNMI score, a phone-unit mutual information metric. We train a shared English-German k-means model and a separate Marathi k-means model, our best configuration. To convert a speech waveform to a unit sequence, we pass it through HuBERT, extract embeddings at an intermediate layer, use the k-means clustering model to map each timestep's embedding to its nearest cluster center, and apply run-length encoding (collapsing consecutive equal units into one) as in prior work (Lee et al., 2022b). A unit sequence is thus a sequence of integers corresponding to indices of mapped clusters. We also experimented with other models, XLSR (Conneau et al., 2020) and Indic-wav2vec (Javed et al., 2021), but decided to use HuBERT on the basis of its units' high PNMI score.

We describe training the clustering model and the evaluation of the speech-to-unit encoder in Section 4.1.
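The S2U mapping described above amounts to frame-level clustering followed by run-length encoding. The sketch below illustrates it with HuggingFace's HuBERT base checkpoint and a scikit-learn-style k-means model; the layer index and de-duplication follow the description here, while the k-means file path, helper name, and input normalization are illustrative assumptions rather than the authors' code.

```python
import torch
import joblib  # assumption: a pre-trained k-means model saved with joblib
from itertools import groupby
from transformers import HubertModel

hubert = HubertModel.from_pretrained("facebook/hubert-base-ls960").eval()
kmeans = joblib.load("kmeans_en_de_200.pkl")  # hypothetical path; 200 clusters as in Section 4.1

def speech_to_units(waveform_16khz, layer=6):
    """Map a 16 kHz mono waveform (1-D float array) to a run-length-encoded unit sequence."""
    wav = torch.as_tensor(waveform_16khz, dtype=torch.float32).unsqueeze(0)  # (1, samples)
    with torch.no_grad():
        out = hubert(wav, output_hidden_states=True)
    frames = out.hidden_states[layer].squeeze(0).cpu().numpy()  # (frames, 768) at the chosen layer
    units = kmeans.predict(frames)                              # nearest cluster index per frame
    return [int(u) for u, _ in groupby(units)]                  # collapse consecutive duplicates
```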
## 2.2 Unit Encoder-Decoder (U2U)

We train our unit-based encoder-decoder model to perform S2ST using a three-step Pretrain-Finetune-Backtranslate approach visualized in Figure 2. We describe each step in this section, and provide implementation details in Section 4.2.

**Pretrain** We initialize the model with mBART-50 (Liu et al., 2020) (a text encoder-decoder model), reinitialize the input and output embedding layers for our new unit vocabulary, and pretrain using their original denoising objective. While we initialize with mBART-50, we feed it unit sequences, which do not exist in the text token space. However, since unit sequences can be treated as text sequences, just with a different vocabulary, we can easily adapt the training pipeline to train on unit sequences rather than text sequences. Given a unit sequence dataset $D$ and a noising function $g(\cdot)$ (we use one that samples contiguous spans and masks them until a fixed ratio of tokens are masked), the decoder is trained to generate the original sequence $X$ given encoder input $g(X)$, optimizing model weights $\theta$ as

$$\arg\min_{\theta} \sum_{X \in D} -\log \Pr(X \mid g(X); \theta).$$
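A minimal sketch of a span-masking noising function of the kind described above: spans with Poisson-distributed lengths are replaced by a single mask token until roughly 35% of the tokens are masked (the ratio and lambda given later in Section 4.2). The mask id, token values and sampling details are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def g(tokens, mask_id, poisson_lambda=2.0, mask_ratio=0.35, rng=None):
    """Return a noised copy of `tokens` with contiguous spans collapsed to a single MASK."""
    rng = rng or np.random.default_rng()
    tokens = list(tokens)
    budget = int(mask_ratio * len(tokens))
    masked = 0
    while masked < budget and len(tokens) > 1:
        span = max(1, int(rng.poisson(poisson_lambda)))   # span length ~ Poisson(lambda)
        span = min(span, len(tokens))
        start = int(rng.integers(0, len(tokens) - span + 1))
        tokens[start:start + span] = [mask_id]            # whole span -> one MASK token
        masked += span
    return tokens
```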
We train two bilingual unit LMs, one for English-German, and one for English-Marathi. They are trained on unit sequences, derived from monolingual speech corpora in the three languages, generated by the respective S2U encoder (shared for English-German and separate for Marathi). We train one Sentencepiece (Kudo & Richardson, 2018) BPE tokenizer per LM to create the vocabulary.

**Finetune** We perform supervised training on the pretrained unit LM using a small parallel S2ST corpus, where the input is a spoken utterance in the source language, and the target is a translated version spoken in the target language. During this finetuning process, we use the standard cross-entropy loss of the decoder generating the target unit sequence, when the ground truth source unit sequence is provided to the encoder.
**Backtranslate** Finally, we perform unsupervised backtranslation (Lample et al., 2018) on our finetuned model. We follow the standard recipes used in unsupervised text backtranslation, with minor modifications to stabilize training in the speech domain. We briefly describe the procedure: unsupervised backtranslation trains the model to reconstruct a unit sequence from a model-generated synthetic translation of the same unit sequence using a round-trip translation consistency loss (visualized in Figure 2). For every training step, denoting the model as M:

1. Get a batch of utterances in one language, B1, and a batch of utterances in another language, B2.

2. Use M to translate B1 to translations B′1, and B2 to translations B′2; this step is inference-only and no gradient updates occur.

3. Given B′1, B′2 as input respectively, compute the decoder cross-entropy loss for the model M to reconstruct the original utterances B1, B2. Using this loss, perform a gradient update on M's parameters.

The above corresponds to online backtranslation, where the 'forward' model (generating the synthetic translation) is the same as the 'backward' model (used to compute the cross-entropy loss). We also explored offline backtranslation, which updates the forward model every epoch, but did not see much difference in performance. Unlike in unsupervised text backtranslation, the training was unstable in both settings. To resolve this, we mix in some supervised data (used in the finetuning step) with online backtranslation during this last stage, which stabilizes learning and shows gains.
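For concreteness, here is a sketch of one online backtranslation step implementing the round-trip consistency loss above, written against a generic HuggingFace-style encoder-decoder unit LM. The sampling settings mirror Section 4.2; the function name, batch-dict layout, and padding/language-tag handling are assumptions and are simplified.

```python
import torch

def backtranslation_step(model, batch_l1, batch_l2, optimizer):
    """One round-trip consistency update; batch_* hold input_ids/attention_mask tensors."""
    model.eval()
    with torch.no_grad():  # step 2: forward translation is inference-only
        synth_l2 = model.generate(**batch_l1, do_sample=True, top_p=0.9, temperature=0.5)
        synth_l1 = model.generate(**batch_l2, do_sample=True, top_p=0.9, temperature=0.5)
    model.train()
    # step 3: reconstruct the original utterances from the synthetic translations
    loss = (
        model(input_ids=synth_l2, labels=batch_l1["input_ids"]).loss
        + model(input_ids=synth_l1, labels=batch_l2["input_ids"]).loss
    )
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```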
## 2.3 Unit-To-Speech Vocoder (U2S)

We adapt prior work (Polyak et al., 2021)¹ on speech resynthesis from discrete units to build our unit-to-speech vocoder. Given a dataset consisting of speech waveforms and their corresponding unit sequences generated by the S2U encoder, the model trains two submodules: a duration prediction module and a HiFi-GAN (Kong et al., 2020) that converts unit sequences back to speech waveforms. The duration predictor is a two-layer CNN that takes a run-length-encoded unit sequence as an input, predicts the duration of each unit, and repeats each unit to match its predicted duration. The HiFi-GAN generator consists of a sequence of transposed CNNs that take full unit sequences as input and sequentially upsample the sequence to obtain speech waveforms as output. The HiFi-GAN is trained as a GAN with this generator and a set of CNN discriminators. We train separate U2S vocoders for each language (English, German, Marathi).

¹ https://github.com/facebookresearch/speech-resynthesis/tree/main/examples/speech_to_speech_translation
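The sketch below illustrates the duration-expansion step that sits between the run-length-encoded units and the HiFi-GAN generator: a small CNN predicts a per-unit duration and each unit is repeated accordingly. Layer sizes, the embedding, and the clamping are illustrative assumptions; Polyak et al. (2021) describe the actual architecture and training losses.

```python
import torch
import torch.nn as nn

class DurationPredictor(nn.Module):
    def __init__(self, n_units=200, dim=128):
        super().__init__()
        self.emb = nn.Embedding(n_units, dim)
        self.net = nn.Sequential(                       # two-layer CNN over the unit sequence
            nn.Conv1d(dim, dim, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(dim, 1, kernel_size=3, padding=1),
        )

    def forward(self, units):                           # units: (B, T) run-length-encoded ids
        x = self.emb(units).transpose(1, 2)             # (B, dim, T)
        d = self.net(x).squeeze(1)                      # (B, T) raw duration predictions
        return torch.clamp(d.round().long(), min=1)     # at inference: at least one frame per unit

def expand(units_1d, durations_1d):
    """Repeat each unit to match its predicted duration (single 1-D sequence)."""
    return torch.repeat_interleave(units_1d, durations_1d)
```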
## 3 Experimental Setup

## 3.1 Datasets

Table 1 summarizes the datasets used in our work. For each language pair, we train models on different domains. Durations reported for parallel translation datasets correspond to durations of the source speech. More dataset details are in Table 4 of Appendix A.

**English-German** For pretraining, we use the union of the transcribed set of Voxpopuli (Wang et al., 2021) and randomly-sampled subsets of the Europarl v3 (Koehn, 2005) train set that we call Europarl-small and Europarl-mid (refer to Table 4 of Appendix A for statistics), collected from European Parliament recordings. For finetuning, we use two datasets: (1) a randomly-sampled 20-hr (10-hr per translation direction, i.e. en→de and de→en) subset of the Europarl-ST (Iranzo-Sánchez et al., 2019) train set and (2) a randomly-sampled 20-hr (10-hr per translation direction) subset of the CVSS (Jia et al., 2022) train set. For the last backtranslation step, we use Voxpopuli and Common Voice 4 (Ardila et al., 2020) data for the round-trip consistency loss. Common Voice and CVSS are collected using the Mozilla Common Voice project and consist of recordings of crowd-sourced workers reading out sentences primarily derived from Wikipedia; thus they do not belong to the European Parliament domain. For evaluation, we use the Europarl-ST (Iranzo-Sánchez et al., 2019) (for both de→en and en→de) and CVSS (Jia et al., 2022) (for de→en) test sets.

**English-Marathi** For pretraining, we use the union of the Shrutilipi (Bhogale et al., 2022) transcribed Marathi dataset, collected from All India Radio broadcasts, and the English transcribed train set of Voxpopuli. We were unable to find domain-matched speech translation datasets for Marathi-English. Thus, we synthetically generate parallel datasets by translating the source language utterance to the target language utterance using the Google Translate API². An author of this paper, who speaks both Marathi and English, manually checked a few utterances and found the translations to be of high quality. We construct two such datasets, each consisting of train and test sets: (1) synth-Europarl-ST: translating the English side of the English-German Europarl-ST train and test sets to Marathi. (2) synth-Shrutilipi-ST: translating 100-hr and 10-hr subsets of the Shrutilipi dataset to English, creating a train and test set respectively. For finetuning, we randomly sampled 60-hr (30-hr per translation direction) subsets of the train sets of these two datasets. We empirically found that we need more data in English-Marathi compared to English-German, which we hypothesize is due to greater language and domain dissimilarities. For the backtranslation step, we use the union of the Voxpopuli and Shrutilipi datasets for the round-trip consistency loss. For evaluation, we use the test sets of these synth-Europarl-ST (where Marathi is translated from English) and synth-Shrutilipi-ST (where English is translated from Marathi) datasets. We only evaluate the mr→en translation direction for both. None of the targets in the test sets of either dataset have been seen during pretraining, making them suitable for use.

² https://cloud.google.com/translate/docs/advanced/batch-translation

| Model Name | Languages | Pretrain                | Finetune       | Backtranslate           | Evaluation           |
|------------|-----------|-------------------------|----------------|-------------------------|----------------------|
| MdeEP      | de, en    | VP (777h) + EP (5381h)  | EP-ST (20h)    | VP (777h)               | EP-ST (9h) en↔de     |
| MdeCV      | de, en    | VP (777h) + EP (5381h)  | CVSS (20h)     | CV (382h)               | CVSS (15h) de→en     |
| MmrEP      | mr, en    | VP (529h) + Shr (1000h) | s-EP-ST (60h)  | VP (529h) + Shr (1000h) | s-EP-ST (9h) mr→en   |
| MmrShr     | mr, en    | VP (529h) + Shr (1000h) | s-Shr-ST (60h) | VP (529h) + Shr (1000h) | s-Shr-ST (10h) mr→en |

Table 1: Model configurations. For each dataset, we mark its duration in parentheses. Abbreviations: VP = Voxpopuli, EP = Europarl, EP-ST = Europarl-ST, CV = CommonVoice, Shr = Shrutilipi, s-EP-ST = synth-Europarl-ST, s-Shr-ST = synth-Shrutilipi-ST.
## 3.2 Model Configurations

Table 1 describes the training and evaluation datasets for each of our four models. MdeEP is trained and evaluated entirely within the European Parliament domain: it is pretrained on the union of Voxpopuli and Europarl v3, finetuned on Europarl-ST, backtranslated with Voxpopuli, and evaluated on Europarl-ST. MdeCV uses the same pretraining, but is finetuned on CVSS, backtranslated with Common Voice 4.0, and evaluated on CVSS. Common Voice and CVSS consist of crowd-sourced speech recordings reading aloud sentences primarily derived from Wikipedia, which differ from the European Parliament domain. MmrEP and MmrShr are both pretrained and backtranslated with the union of Voxpopuli and Shrutilipi, i.e. English European Parliament data and Marathi All India Radio data. MmrEP is finetuned and evaluated on the European Parliament domain using synth-Europarl-ST, while MmrShr is finetuned and evaluated on the All India Radio domain using synth-Shrutilipi-ST. Each model is thus finetuned and evaluated on the train and test splits of a single dataset.
## 3.3 Generating Synthetic Speech Data

We use single-speaker synthesized speech data for both training and evaluation, following early S2ST work (Jia et al., 2019). All of our training datasets have ground truth transcripts; thus, we use TTS models to regenerate the speech from these transcripts and use the synthesized speech in our experiments. To generate synthetic speech data for English and German, we use Coqui-AI's TTS software³. These are VITS (Kim et al., 2021) models, a conditional VAE trained with an adversarial learning objective, trained on LJSpeech (Ito & Johnson, 2017) and Thorsten (Müller & Kreutz), each of which contains around 24 hrs of clean read speech. We use the IndicTTS (Kumar et al., 2023) model for Marathi; this is a FastPitch (Łańcucki, 2021) model trained on the IndicTTS Database (Baby et al., 2016), which contains around 3 hrs of clean read speech.

³ We use the en/ljspeech/vits model for English and the de/thorsten/vits model for German. https://github.com/coqui-ai/TTS
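Regenerating speech from transcripts with Coqui TTS can be done along the following lines; the model ids match the footnote above, while the example sentences and output paths are illustrative and batching over full transcript files is omitted.

```python
# Sketch of single-speaker synthetic speech generation with Coqui TTS.
from TTS.api import TTS

tts_en = TTS("tts_models/en/ljspeech/vits")   # English VITS model trained on LJSpeech
tts_de = TTS("tts_models/de/thorsten/vits")   # German VITS model trained on Thorsten

tts_en.tts_to_file(text="The committee approved the proposal.", file_path="sample_en.wav")
tts_de.tts_to_file(text="Der Ausschuss hat den Vorschlag angenommen.", file_path="sample_de.wav")
```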
## 4 Model Implementation

## 4.1 Speech-To-Unit Encoder (S2U)

We build our speech-to-unit encoder using k-means clustering over the embeddings produced by a self-supervised speech encoder model. We decide (a) which speech encoder model to use, (b) whether to learn separate per-language k-means models or a joint k-means model, and (c) which encoder layer to take embeddings from. We measure the average Pointwise Normalized Mutual Information (PNMI) between unit sequences and phoneme sequences extracted from the same datasets, following Hsu et al. (2021), choosing the configuration that yields the higher PNMI. We compare HuBERT (Hsu et al., 2021) and XLSR (Conneau et al., 2020) for English and German, and HuBERT and Indic-wav2vec (Javed et al., 2021) for Marathi for (a); we try all combinations for (b); and we try several layers for (c). To train the k-means models, we use ≈50 hrs of raw speech data from each language, obtained from a random subset of Librispeech (Panayotov et al., 2015) for English, Multilingual Librispeech (Pratap et al., 2020) for German, and Shrutilipi (Bhogale et al., 2022) for Marathi. Our best configuration uses a Marathi k-means model (with 100 clusters) and a shared English-German k-means model (with 200 clusters). We find that this works better than training three individual models or a single model, which we hypothesize is due to the similarity between English and German.

For German and English, we use the 6th layer of HuBERT, while for Marathi we use the 8th layer. The details can be found in Appendix C.
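PNMI here is the mutual information between frame-level phone labels and unit ids, normalized by the phone entropy, following Hsu et al. (2021). A small sketch under the assumption that aligned per-frame phone and unit label arrays are already available (the authors' exact evaluation script may differ):

```python
import numpy as np

def pnmi(phones, units):
    """I(phone; unit) / H(phone) from two aligned integer label sequences."""
    phones, units = np.asarray(phones), np.asarray(units)
    joint = np.zeros((phones.max() + 1, units.max() + 1))
    for p, u in zip(phones, units):
        joint[p, u] += 1
    joint /= joint.sum()                                  # empirical joint distribution
    p_phone = joint.sum(axis=1, keepdims=True)            # (P, 1) marginal over phones
    p_unit = joint.sum(axis=0, keepdims=True)             # (1, U) marginal over units
    nz = joint > 0
    mi = np.sum(joint[nz] * np.log(joint[nz] / (p_phone @ p_unit)[nz]))
    h_phone = -np.sum(p_phone[p_phone > 0] * np.log(p_phone[p_phone > 0]))
    return mi / h_phone
```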
## 4.2 Unit Encoder-Decoder (U2U)

**Preprocessing** We train one Sentencepiece BPE tokenizer per LM on the speech units with a 10000-size vocabulary, using Voxpopuli for the English-German LM and the union of Voxpopuli and Shrutilipi for the English-Marathi LM.
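A sketch of such a unit tokenizer: unit ids are serialized as space-separated symbols and a BPE SentencePiece model with a 10000-entry vocabulary is trained over them. The symbol format and file names are assumptions; only the vocabulary size and model type come from the text.

```python
import sentencepiece as spm

def units_to_line(units):
    return " ".join(f"u{int(u)}" for u in units)   # e.g. "u17 u4 u191 ..."

# units_corpus.txt is assumed to contain one serialized unit sequence per line.
spm.SentencePieceTrainer.train(
    input="units_corpus.txt",
    model_prefix="unit_bpe_en_de",
    vocab_size=10000,
    model_type="bpe",
)

sp = spm.SentencePieceProcessor(model_file="unit_bpe_en_de.model")
pieces = sp.encode(units_to_line([17, 4, 191]), out_type=str)  # BPE pieces over unit symbols
```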
**Pretrain** Both LMs are initialized with the mbart-large-50 (Liu et al., 2020) Huggingface checkpoint, except the input and output embedding layers, which are reinitialized. The noising function g is defined similarly to mBART's; until the number of masked tokens reaches 35%, we sample a span length l from a Poisson distribution with mean λ and replace a random contiguous unit sequence of length l with a single MASK token. For the English-German model, we pretrain in several stages, increasing the task difficulty by masking longer spans in later stages. We first train on Voxpopuli for 900k updates with a Poisson lambda of 2. We then train on a combination of Voxpopuli and Europarl-small for 5400k updates: 2700k updates with a Poisson lambda of 2 and 2700k updates with a lambda of 8 (a harder task due to longer spans). We finally train on a combination of Voxpopuli and Europarl-mid for 2700k updates. For English-Marathi, we only perform a single round, training on a combination of Voxpopuli and Shrutilipi with a Poisson lambda of 2 for 900k updates. For both LMs, we use an LR scheduler that starts with an LR of 1e-7, ramps up linearly to 1e-5, and then decays exponentially to 1e-6. We train on 4 GPUs. We use variably sized batches so that shorter sequences can be packed into larger batches; the total number of tokens in a batch is a maximum of 3125 tokens per language for English-German and 6250 tokens per language for English-Marathi, with equal amounts of tokens per language.
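The learning-rate schedule just described (linear warmup from 1e-7 to 1e-5, then exponential decay towards 1e-6) can be written as a simple step-to-LR function; the warmup and total step counts below are illustrative placeholders, since the text does not specify them.

```python
def lr_at(step, warmup=10_000, total=900_000,
          lr_start=1e-7, lr_peak=1e-5, lr_end=1e-6):
    """Linear warmup to lr_peak, then exponential interpolation down to lr_end."""
    if step < warmup:
        return lr_start + (lr_peak - lr_start) * step / warmup
    progress = (step - warmup) / max(1, total - warmup)
    return lr_peak * (lr_end / lr_peak) ** progress
```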
**Finetune** We use label smoothing, a dropout of 0.2 and a learning rate of 3e-5. We train for 40 epochs with a total batch size of 3748 tokens on 4 GPUs. We finetune all parameters of the models except for MdeEP, for which we finetune only the last 5 layers of both the encoder and decoder, as this shows performance gains.

**Backtranslate** When sampling translations during forward translation, we use nucleus sampling (Holtzman et al., 2019) with a top-p value of 0.9 and a temperature of 0.5. We use label smoothing of 0.2, a learning rate of 3e-5, and train for 3 epochs with a total batch size of 3748 tokens on 4 GPUs.
## 4.3 Unit-To-Speech Vocoder (U2S)

A separate vocoder is trained for each language, mapping from the unit vocabulary (size 200 for English-German, size 100 for Marathi) to speech clips at 16kHz. Using the unit sequences for the Voxpopuli (English and German) and Shrutilipi (Marathi) datasets, generated from our S2U encoder, we train vocoders to generate the speech from these unit sequences. We train across 4 GPUs with a learning rate of 2e-4, a batch size of 128 (for en-de) and 240 (for mr), and train for 60k updates; other hyperparameters follow Polyak et al. (2021). As a sanity check, we evaluate S2U and U2S by computing the resynthesis WER, which measures how well passing a given speech signal through S2U and U2S preserves the content of the input speech signal. We find that our models perform comparably to previous models (Lee et al., 2022a). More details about this evaluation are in Appendix D.
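Resynthesis WER, as used for this sanity check, can be computed roughly as follows, reusing the S2U and U2S components sketched earlier and any ASR system; the helper names are placeholders rather than the authors' code.

```python
from jiwer import wer

def resynthesis_wer(waveforms, transcripts, speech_to_units, units_to_speech, transcribe):
    """WER between ground-truth transcripts and ASR output on S2U->U2S resynthesized audio."""
    hypotheses = [transcribe(units_to_speech(speech_to_units(w))) for w in waveforms]
    return wer([t.lower() for t in transcripts], [h.lower() for h in hypotheses])
```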
## 5 Results

## 5.1 Evaluation Setup

We use the ASR-BLEU evaluation metric following prior work (Lee et al., 2022a;b): given a hypothesis speech translation and a ground truth text translation, we run ASR on the generated speech and compute the BLEU between the ASR transcript and the ground truth text translation with SacreBLEU's default parameters. We evaluate the de→en, en→de and mr→en language directions. We opted not to evaluate the en→mr direction due to poor Marathi ASR models that resulted in excessively noisy ASR-BLEU scores. We generate translations from our models using beam search decoding with a beam size of 10. When evaluating on the Europarl-ST dataset, we use wav2vec2.0-based ASR models with greedy decoding (facebook/wav2vec2-large-960h-lv60-self and jonatasgrosman/wav2vec2-xls-r-1b-german), as used by prior S2ST work on Europarl-ST (Duquenne et al. (2022); Wang et al. (2022b) and others). When evaluating on the CVSS dataset, we use a medium-sized Whisper ASR model used by prior S2ST work on CVSS (Fu et al., 2023). When evaluating Marathi-English translation, we use the facebook/wav2vec2-large-960h-lv60-self ASR model.
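Concretely, ASR-BLEU for the English-target directions can be computed along these lines, with the wav2vec 2.0 checkpoint named above, greedy CTC decoding, and SacreBLEU defaults; batching, casing and text normalization details are assumptions.

```python
import torch
import sacrebleu
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

CKPT = "facebook/wav2vec2-large-960h-lv60-self"
processor = Wav2Vec2Processor.from_pretrained(CKPT)
asr = Wav2Vec2ForCTC.from_pretrained(CKPT).eval()

def asr_bleu(generated_waveforms, reference_texts):
    """Transcribe generated 16 kHz audio and score it against reference translations."""
    hyps = []
    for wav in generated_waveforms:
        inputs = processor(wav, sampling_rate=16000, return_tensors="pt")
        with torch.no_grad():
            logits = asr(inputs.input_values).logits
        ids = torch.argmax(logits, dim=-1)                 # greedy CTC decoding
        hyps.append(processor.batch_decode(ids)[0].lower())
    return sacrebleu.corpus_bleu(hyps, [[r.lower() for r in reference_texts]]).score
```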
## 5.2 Comparison Systems

Our results in Tables 2 and 3 compare several speech translation systems.

**Topline Models** We compare our approach to existing models which use **more** resources:

- **Speech-to-text (S2T) models trained on large-scale parallel speech-text translation data.** ⃝a (Iranzo-Sánchez et al., 2019) is an ASR-MT cascade model whose MT component is trained on a large-scale text translation dataset, OPUS (Tiedemann, 2012). ⃝b and ⃝c are Transformer-based models from Wang et al. (2021) trained on the union of Europarl-ST and CVSS (total duration 226h), with ⃝c being additionally trained on ≈300h of Voxpopuli aligned speech translation data.

- **Speech-to-speech translation (S2ST) models trained on large-scale parallel speech-text translation data.** ⃝d is Translatotron 2 (Jia et al., 2021), a spectrogram-to-spectrogram encoder-synthesizer model trained with text supervision for the decoder, with 120h of German-English data and about 360h of aligned data in 3 other X-to-English language pairs.

- **S2ST models trained without parallel data, but trained on large-scale monolingual text data.** ⃝e is a model by Fu et al. (2023) cascading an unsupervised ASR - unsupervised MT - unsupervised TTS pipeline.

- **Textless speech-to-speech translation (S2ST) models trained on large-scale parallel speech-speech translation data.** ⃝f is a bilingual S2ST model trained on the large, mined SpeechMatrix dataset (≈2600 hrs of source speech for the en→de and de→en directions combined) by Duquenne et al. (2022). ⃝g (Kim et al., 2023) is a multilingual S2ST model trained on 650h of parallel aligned English-German Voxpopuli data, and about 12k hours of parallel aligned data in 18 other X-to-English language pairs. ⃝h and ⃝o present our pretrained unit LMs fine-tuned on large-scale data, i.e. the Europarl-ST train set (110 hours), the CVSS train set (180 hours), the synth-Europarl-ST train set (125h) and the synth-Shrutilipi-ST train set (176h), using the same hyperparameters as our four low-resource models.

**Our Low-Resource Models** We train four models on different domains: MdeEP, MdeCV, MmrEP and MmrShr, as described in Section 3.2. We evaluate each model with its in-domain evaluation data, i.e., the MdeEP model on Europarl-ST, the MdeCV model on CVSS, MmrEP on synth-Europarl-ST, and the MmrShr model on synth-Shrutilipi-ST. ⃝i and ⃝p report the model performance after our pretraining and finetuning steps. ⃝j and ⃝q report the model performance after performing backtranslation.
## 5.3 Main Results

We present our results for the English-German pair in Table 2 and the results for the English-Marathi pair in Table 3. Comparing the text-based S2T/S2ST topline models (⃝a-⃝d) with the textless S2ST topline models (⃝f-⃝h), we can see that the textless S2ST models, despite being trained with much more data in some cases, underperform the text-based S2T/S2ST models. This showcases the difficulty of learning a textless S2ST model. S2T models also do not suffer from the ASR errors introduced at evaluation time, which affect all other systems that produce speech. Our topline model ⃝h outperforms row ⃝f and row ⃝g for en→de translation despite using much less data, indicating the benefits of pretraining.

Now, we discuss our models trained in low-resource settings. We can see from rows ⃝i and ⃝p that our pretrained models, given only 20 hr of parallel data (for English-German) and 60 hr of parallel data (for English-Marathi), learn S2ST models with reasonable BLEU scores. Performing backtranslation consistently improves model performance, resulting in our best low-resource models in rows ⃝j and ⃝q. Our de→en Europarl-ST performance and the mr→en synth-Europarl-ST performance are within 1-2 BLEU of our supervised toplines ⃝h and ⃝o despite being trained on much less data. However, our models underperform the textless high-resource (rows ⃝f and ⃝g) and text-based zero-resource (row ⃝e) S2ST models overall, leaving room for future work.
## 5.4 Ablations

We perform ablations for the MdeEP model evaluated on the Europarl-ST test set to justify our modeling choices.

**Ablating pretraining** Our LM is initialized from the text mBART checkpoint, and then trained on a unit-based denoising objective. Without this pretraining (i.e., finetuning and backtranslating with the base mBART checkpoint), as seen in rows ⃝k and ⃝l, we obtain very low ASR-BLEUs of less than 2 points. These results suggest that unit LM pretraining is essential in order to learn good S2ST systems in low-resource settings.

**Ablating finetuning** We train an unsupervised S2ST model, which is trained with a backtranslation round-trip consistency loss on top of the pretrained unit LM. The result, ⃝m, shows that this does not work, with near-zero BLEU scores. This suggests some amount of parallel speech is necessary.

**Ablating replay in backtranslation** We have already seen that adding backtranslation after finetuning boosts performance by 1-2 BLEU, demonstrated by comparing row ⃝i to ⃝j or row ⃝p to ⃝q. We replay the supervised low-resource finetuning data during backtranslation to stabilize training. We ablate training with this replay by running the backtranslation step with just the round-trip consistency loss. The result, row ⃝n, shows that the performance worsens compared to the initialization of row ⃝i. With replay, however, we get the results in row ⃝j, which improve upon the initialization.

| Model                                                  | Parallel #hrs | Europarl-ST de→en | Europarl-ST en→de | CVSS de→en |
|--------------------------------------------------------|---------------|-------------------|-------------------|------------|
| **Topline models: Text-based High-Resource S2T/S2ST**  |               |                   |                   |            |
| ⃝a Cascaded ASR-MT (Iranzo-Sánchez et al., 2019)        | N/A           | 21.3              | 22.4              | -          |
| ⃝b E2E S2T (Wang et al., 2021)                          | 226h          | 17.5              | -                 | -          |
| ⃝c E2E S2T w/ Voxpop-Aligned (Wang et al., 2021)        | ≈500h         | 18.8              | -                 | -          |
| ⃝d Translatotron 2 (Jia et al., 2021)                   | 120h⁴         | -                 | -                 | 19.7       |
| **Topline models: Text-based Zero-Resource S2ST**      |               |                   |                   |            |
| ⃝e UASR → UMT → UTTS (Fu et al., 2023)                  | 0h            | -                 | -                 | 14.7       |
| **Topline models: Textless High-Resource S2ST**        |               |                   |                   |            |
| ⃝f Bilingual S2S (Duquenne et al., 2022)                | ≈2600h        | 16.3              | 10.1              | -          |
| ⃝g Multilingual UTUT (Kim et al., 2023)                 | 650h⁵         | 15.8              | 9.8               | -          |
| ⃝h Pretrain + Fully Finetune (Ours)                     | 110h / 180h   | 12.0              | 13.4              | 13.6       |
| **Textless Low-Resource S2ST**                         |               |                   |                   |            |
| ⃝i Pretrain + Finetune (Ours)                           | 20h           | 7.8               | 6.8               | 5.8        |
| ⃝j + Backtranslate (Ours)                               | 20h           | 10.0              | 8.3               | 7.7        |
| **Ablations: Ablating Pretraining**                    |               |                   |                   |            |
| ⃝k Text mBART + Finetune                                | 20h           | 1.0               | 0.3               | -          |
| ⃝l + Backtranslate                                      | 20h           | 1.3               | 0.4               | -          |
| **Ablations: Ablating Finetuning**                     |               |                   |                   |            |
| ⃝m Pretrain + Backtranslate                             | 0h            | 0.4               | 0.1               | -          |
| **Ablations: Ablating Backtranslation Replay**         |               |                   |                   |            |
| ⃝n Pretrain + Finetune + Backtranslate w/o replay       | 20h           | 4.3               | 4.0               | -          |

Table 2: English-German S2ST evaluation using the ASR-BLEU metric (higher is better) on the Europarl-ST (Iranzo-Sánchez et al., 2019) and CVSS (Jia et al., 2022) test sets. Topline models have either been trained on high-resource supervised datasets, or are not textless due to use of intermediate text generation; see Section 5 for discussion. The Parallel #hrs column denotes the number of hours of parallel translation training data. In ⃝h it denotes that 110h of EP-ST data and 180h of CVSS data are used to train two separate topline models.

⁴ In addition to 120h of parallel German-English data, Translatotron 2 is trained on X-to-English translation data from 3 other languages, totalling ≈480 hours of parallel data.

⁵ In addition to 650h of parallel German-English data, UTUT is trained on X-to-English translation data from 18 other languages, totalling ≈12000 hours of parallel data.

| Model                                           | Parallel #hrs | synth-EP-ST mr→en | synth-Shr-ST mr→en |
|-------------------------------------------------|---------------|-------------------|--------------------|
| **Topline models: Textless High-Resource S2ST** |               |                   |                    |
| ⃝o Pretrain + Finetune (Full) (Ours)             | 125h / 176h   | 10.9              | 17.8               |
| **Textless Low-Resource S2ST**                  |               |                   |                    |
| ⃝p Pretrain + Finetune (Ours)                    | 60h           | 8.3               | 9.6                |
| ⃝q + Backtranslation (Ours)                      | 60h           | 9.2               | 10.0               |

Table 3: Marathi-English S2ST evaluation using the ASR-BLEU metric (higher is better) on our synth-Europarl-ST and synth-Shrutilipi-ST test sets. Topline models have been trained on high-resource supervised datasets; see Section 5 for discussion. The Parallel #hrs column denotes the number of hours of parallel translation training data. In ⃝o it denotes that 125h of synth-Europarl-ST data and 176h of synth-Shrutilipi-ST data are used to train two separate topline models.
## 6 Related Work

## 6.1 Speech-To-Speech Translation (S2ST)

While cascaded S2ST models (Nakamura et al., 2006; Wahlster, 2000) that generate intermediate text translations (either as an ASR-MT-TTS or an S2T-TTS cascade) have existed for a long time, end-to-end S2ST models can be traced back to Jia et al. (2019), who trained a model that directly translates source language speech waveforms to speech waveforms in the target language. While most S2ST systems directly predict speech waveforms at inference time, some S2ST models (Jia et al., 2019; 2021; Lee et al., 2022a; Inaguma et al., 2022) are text-based, i.e. they opt to use textual supervision during training to stabilize system components or to obtain improved performance, while other S2ST models (Lee et al., 2022b; Li et al., 2022; Kim et al., 2023; Zhu et al., 2023) are trained in a textless manner, representing speech using self-supervised speech units, potentially paving the way to extend S2ST technology to hundreds of languages that are primarily spoken or have very poor ASR systems. Most of these S2ST models, especially the textless ones, require large training datasets of parallel speech data, where each input utterance is paired with a spoken form of its translation in the target language. In order to reduce this dependency on parallel data, unsupervised S2ST systems (Wang et al., 2022b; Fu et al., 2023; Nachmani et al., 2023) that do not use any parallel data at all have recently been proposed. However, none of them are textless; these approaches either train non-end-to-end cascaded S2ST models (ASR-MT-TTS) in an unsupervised manner using unsupervised ASR (Liu et al., 2022b), unsupervised text-based MT (Liu et al., 2020) and unsupervised TTS (Liu et al., 2022a), or use text supervision during training (Nachmani et al., 2023). Thus, the crucial cross-lingual translation model is learned over text tokens, which limits their applicability to spoken languages.

Thus, existing S2ST work falls into two buckets: high-resource textless S2ST and zero-resource text-based S2ST. Our work aims to bridge these two buckets by proposing a textless, low-resource S2ST model, which can be applied to spoken/unwritten languages without needing a lot of parallel speech translation data.

## 6.2 Textless and Unit-Based NLP

While we tackle textless S2ST, textless speech processing has been studied in other tasks such as spoken language modeling (Borsos et al., 2022; Lakhotia et al., 2021; Hassid et al., 2024), emotion conversion (Kreuk et al., 2021), image-speech retrieval (Harwath et al., 2016; Peng & Harwath, 2022), spoken question answering (Lin et al., 2022) and speech evaluation (Chen et al., 2022; Besacier et al., 2023). Furthermore, progress in several other speech tasks like TTS (Wang et al., 2023) that involve both speech and text has been achieved by using powerful self-supervised units (semantic units like HuBERT (Hsu et al., 2021) and acoustic units like EnCodec (Défossez et al., 2022)).
## 7 Conclusion

We present the first textless low-resource speech-to-speech translation system, capable of learning from dozens of hours of parallel translation data, built by pretraining, finetuning, and backtranslating a language model over self-supervised speech unit sequences rather than text. We demonstrate its efficacy on 2 language pairs (English-German and English-Marathi) across 3 different domains. While our models achieve decent translation performance, close to supervised toplines in some cases, they still underperform models trained on far more data or models that make use of text data, implying that several challenges remain to make these models highly performant. However, our approach holds great promise for modelling low-resource, primarily spoken languages. We hypothesize, based on similar findings for text machine translation, that scaling our approach to a larger unit LM pretrained on more data will improve performance and may unlock unsupervised textless S2ST akin to unsupervised text MT (Liu et al., 2020). Future work can investigate the use of better S2U unit encoders for training better unit LMs, and training unit LMs on a larger set of languages.
## References

Rosana Ardila, Megan Branson, Kelly Davis, Michael Henretty, Michael Kohler, Josh Meyer, Reuben Morais, Lindsay Saunders, Francis M. Tyers, and Gregor Weber. Common voice: A massively-multilingual speech corpus, 2020.

Arun Baby, Anju Leela Thomas, NL Nishanthi, TTS Consortium, et al. Resources for Indian languages. In *Proceedings of Text, Speech and Dialogue*, 2016.

Laurent Besacier, Swen Ribeiro, Olivier Galibert, and Ioan Calapodescu. A textless metric for speech-to-speech comparison, 2023.

Kaushal Santosh Bhogale, Abhigyan Raman, Tahir Javed, Sumanth Doddapaneni, Anoop Kunchukuttan, Pratyush Kumar, and Mitesh M. Khapra. Effectiveness of mining audio and text pairs from public data for improving ASR systems for low-resource languages, 2022.

Zalán Borsos, Raphaël Marinier, Damien Vincent, Eugene Kharitonov, Olivier Pietquin, Matt Sharifi, Olivier Teboul, David Grangier, Marco Tagliasacchi, and Neil Zeghidour. AudioLM: a language modeling approach to audio generation, 2022. URL https://arxiv.org/abs/2209.03143.

Mingda Chen, Paul-Ambroise Duquenne, Pierre Andrews, Justine Kao, Alexandre Mourachko, Holger Schwenk, and Marta R. Costa-jussà. BLASER: A text-free speech-to-speech translation evaluation metric, 2022. URL https://arxiv.org/abs/2212.08486.

Yu-An Chung, Yu Zhang, Wei Han, Chung-Cheng Chiu, James Qin, Ruoming Pang, and Yonghui Wu. W2v-BERT: Combining contrastive learning and masked language modeling for self-supervised speech pre-training, 2021.

Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, and Michael Auli. Unsupervised cross-lingual representation learning for speech recognition, 2020.

Paul-Ambroise Duquenne, Hongyu Gong, Ning Dong, Jingfei Du, Ann Lee, Vedanuj Goswani, Changhan Wang, Juan Pino, Benoît Sagot, and Holger Schwenk. SpeechMatrix: A large-scale mined corpus of multilingual speech-to-speech translations, 2022.

Alexandre Défossez, Jade Copet, Gabriel Synnaeve, and Yossi Adi. High fidelity neural audio compression, 2022.

Yu-Kuan Fu, Liang-Hsuan Tseng, Jiatong Shi, Chen-An Li, Tsu-Yuan Hsu, Shinji Watanabe, and Hung-yi Lee. Improving cascaded unsupervised speech translation with denoising back-translation, 2023.

Barry Haddow, Rachel Bawden, Antonio Valerio Miceli Barone, Jindřich Helcl, and Alexandra Birch. Survey of low-resource machine translation. *Computational Linguistics*, 48(3):673–732, September 2022. doi: 10.1162/coli_a_00446. URL https://aclanthology.org/2022.cl-3.6.

David F. Harwath, A. Torralba, and James R. Glass. Unsupervised learning of spoken language with visual context. In *NIPS*, 2016.

Michael Hassid, Tal Remez, Tu Anh Nguyen, Itai Gat, Alexis Conneau, Felix Kreuk, Jade Copet, Alexandre Defossez, Gabriel Synnaeve, Emmanuel Dupoux, Roy Schwartz, and Yossi Adi. Textually pretrained speech language models, 2024.

Ari Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. The curious case of neural text degeneration. *ArXiv*, abs/1904.09751, 2019.

Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, and Abdelrahman Mohamed. HuBERT: Self-supervised speech representation learning by masked prediction of hidden units. *IEEE/ACM Transactions on Audio, Speech, and Language Processing*, 29:3451–3460, 2021.

Hirofumi Inaguma, Sravya Popuri, Ilia Kulikov, Peng-Jen Chen, Changhan Wang, Yu-An Chung, Yun Tang, Ann Lee, Shinji Watanabe, and Juan Pino. UnitY: Two-pass direct speech-to-speech translation with discrete units, 2022. URL https://arxiv.org/abs/2212.08055.

Javier Iranzo-Sánchez, Joan Albert Silvestre-Cerdà, Javier Jorge, Nahuel Roselló, Adrià Giménez, Albert Sanchis, Jorge Civera, and Alfons Juan. Europarl-ST: A multilingual corpus for speech translation of parliamentary debates, 2019. URL https://arxiv.org/abs/1911.03167.

Keith Ito and Linda Johnson. The LJ Speech dataset. https://keithito.com/LJ-Speech-Dataset/, 2017.

Tahir Javed, Sumanth Doddapaneni, Abhigyan Raman, Kaushal Santosh Bhogale, Gowtham Ramesh, Anoop Kunchukuttan, Pratyush Kumar, and Mitesh M. Khapra. Towards building ASR systems for the next billion users, 2021.

Ye Jia, Ron J. Weiss, Fadi Biadsy, Wolfgang Macherey, Melvin Johnson, Zhifeng Chen, and Yonghui Wu. Direct speech-to-speech translation with a sequence-to-sequence model. In *Interspeech*, 2019. URL https://arxiv.org/abs/1904.06037.

Ye Jia, Michelle Tadmor Ramanovich, Tal Remez, and Roi Pomerantz. Translatotron 2: High-quality direct speech-to-speech translation with voice preservation, 2021.

Ye Jia, Michelle Tadmor Ramanovich, Quan Wang, and Heiga Zen. CVSS corpus and massively multilingual speech-to-speech translation. In *Proceedings of the Language Resources and Evaluation Conference (LREC)*, pp. 6691–6703, 2022.

Jaehyeon Kim, Jungil Kong, and Juhee Son. Conditional variational autoencoder with adversarial learning for end-to-end text-to-speech, 2021.

Minsu Kim, Jeongsoo Choi, Dahun Kim, and Yong Man Ro. Many-to-many spoken language translation via unified speech and text representation learning with unit-to-unit translation, 2023.

Philipp Koehn. Europarl: A parallel corpus for statistical machine translation. In *Proceedings of Machine Translation Summit X: Papers*, pp. 79–86, Phuket, Thailand, September 13-15 2005. URL https://aclanthology.org/2005.mtsummit-papers.11.

Jungil Kong, Jaehyeon Kim, and Jaekyoung Bae. HiFi-GAN: Generative adversarial networks for efficient and high fidelity speech synthesis, 2020.

Felix Kreuk, Adam Polyak, Jade Copet, Eugene Kharitonov, Tu-Anh Nguyen, Morgane Rivière, Wei-Ning Hsu, Abdelrahman Mohamed, Emmanuel Dupoux, and Yossi Adi. Textless speech emotion conversion using discrete and decomposed representations, 2021. URL https://arxiv.org/abs/2111.07402.

Taku Kudo and John Richardson. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. *ArXiv*, abs/1808.06226, 2018.

Gokul Karthik Kumar, Praveen S V, Pratyush Kumar, Mitesh M. Khapra, and Karthik Nandakumar. Towards building text-to-speech systems for the next billion users, 2023.

Kushal Lakhotia, Evgeny Kharitonov, Wei-Ning Hsu, Yossi Adi, Adam Polyak, Benjamin Bolte, Tu Anh Nguyen, Jade Copet, Alexei Baevski, Abdelrahman Mohamed, and Emmanuel Dupoux. Generative spoken language modeling from raw audio. *CoRR*, 2021. URL https://arxiv.org/abs/2102.01192.

Guillaume Lample, Ludovic Denoyer, and Marc'Aurelio Ranzato. Unsupervised machine translation using monolingual corpora only. *ArXiv*, abs/1711.00043, 2018.

Ann Lee, Peng-Jen Chen, Changhan Wang, Jiatao Gu, Sravya Popuri, Xutai Ma, Adam Polyak, Yossi Adi, Qing He, Yun Tang, Juan Pino, and Wei-Ning Hsu. Direct speech-to-speech translation with discrete units. In *Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)*, 2022a.

Ann Lee, Hongyu Gong, Paul-Ambroise Duquenne, Holger Schwenk, Peng-Jen Chen, Changhan Wang, Sravya Popuri, Yossi Adi, Juan Pino, Jiatao Gu, and Wei-Ning Hsu. Textless speech-to-speech translation on real data. In *Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*, 2022b.

M. Paul Lewis, Gary F. Simon, and Charles D. Fennig. Ethnologue: Languages of the World, Nineteenth edition. SIL International. Online version: http://www.ethnologue.com, 2016.

Xinjian Li, Ye Jia, and Chung-Cheng Chiu. Textless direct speech-to-speech translation with discrete speech representation, 2022.

Guan-Ting Lin, Yung-Sung Chuang, Ho-Lam Chung, Shu-wen Yang, Hsuan-Jui Chen, Shuyan Dong, Shang-Wen Li, Abdelrahman Mohamed, Hung-yi Lee, and Lin-shan Lee. DUAL: Discrete spoken unit adaptive learning for textless spoken question answering, 2022.

Alexander Liu, Cheng-I Lai, Wei-Ning Hsu, Michael Auli, Alexei Baevski, and James Glass. Simple and effective unsupervised speech synthesis. In *INTERSPEECH*, 2022a.

Alexander H. Liu, Wei-Ning Hsu, Michael Auli, and Alexei Baevski. Towards end-to-end unsupervised speech recognition, 2022b.

Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. Multilingual denoising pre-training for neural machine translation. *Transactions of the Association for Computational Linguistics*, 8:726–742, 2020. doi: 10.1162/tacl_a_00343. URL https://aclanthology.org/2020.tacl-1.47.

Thorsten Müller and Dominik Kreutz. Thorsten-Voice. URL https://github.com/thorstenMueller/Thorsten-Voice.

Eliya Nachmani, Alon Levkovitch, Yifan Ding, Chulayuth Asawaroengchai, Heiga Zen, and Michelle Tadmor Ramanovich. Translatotron 3: Speech to speech translation with monolingual data, 2023.

S. Nakamura, K. Markov, H. Nakaiwa, G. Kikui, H. Kawai, T. Jitsuhiro, J.-S. Zhang, H. Yamamoto, E. Sumita, and S. Yamamoto. The ATR multilingual speech-to-speech translation system. *IEEE Transactions on Audio, Speech, and Language Processing*, 14(2):365–376, 2006. doi: 10.1109/TSA.2005.860774.

Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. Librispeech: An ASR corpus based on public domain audio books. In *2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, pp. 5206–5210, 2015. doi: 10.1109/ICASSP.2015.7178964.

Ankita Pasad, Ju-Chieh Chou, and Karen Livescu. Layer-wise analysis of a self-supervised speech representation model. In *ASRU*, 2021.

Puyuan Peng and David Harwath. Fast-slow transformer for visually grounding speech. In *ICASSP*, 2022.

Adam Polyak, Yossi Adi, Jade Copet, Eugene Kharitonov, Kushal Lakhotia, Wei-Ning Hsu, Abdelrahman Mohamed, and Emmanuel Dupoux. Speech resynthesis from discrete disentangled self-supervised representations. In *Proc. Interspeech 2021*, 2021.

Vineel Pratap, Qiantong Xu, Anuroop Sriram, Gabriel Synnaeve, and Ronan Collobert. MLS: A large-scale multilingual dataset for speech research. In *Interspeech 2020*. ISCA, October 2020. doi: 10.21437/interspeech.2020-2826. URL https://doi.org/10.21437%2Finterspeech.2020-2826.

Seamless Communication, Loïc Barrault, Yu-An Chung, Mariano Cora Meglioli, David Dale, Ning Dong, Paul-Ambroise Duquenne, Hady Elsahar, Hongyu Gong, Kevin Heffernan, John Hoffman, Christopher Klaiber, Pengwei Li, Daniel Licht, Jean Maillard, Alice Rakotoarison, Kaushik Ram Sadagopan, Guillaume Wenzek, Ethan Ye, Bapi Akula, Peng-Jen Chen, Naji El Hachem, Brian Ellis, Gabriel Mejia Gonzalez, Justin Haaheim, Prangthip Hansanti, Russ Howes, Bernie Huang, Min-Jae Hwang, Hirofumi Inaguma, Somya Jain, Elahe Kalbassi, Amanda Kallet, Ilia Kulikov, Janice Lam, Daniel Li, Xutai Ma, Ruslan Mavlyutov, Benjamin Peloquin, Mohamed Ramadan, Abinesh Ramakrishnan, Anna Sun, Kevin Tran, Tuan Tran, Igor Tufanov, Vish Vogeti, Carleigh Wood, Yilin Yang, Bokai Yu, Pierre Andrews, Can Balioglu, Marta R. Costa-jussà, Onur Celebi, Maha Elbayad, Cynthia Gao, Francisco Guzmán, Justine Kao, Ann Lee, Alexandre Mourachko, Juan Pino, Sravya Popuri, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, Paden Tomasello, Changhan Wang, Jeff Wang, and Skyler Wang. SeamlessM4T—Massively multilingual & multimodal machine translation. *ArXiv*, 2023.

Jörg Tiedemann. Parallel data, tools and interfaces in OPUS. In Nicoletta Calzolari, Khalid Choukri, Thierry Declerck, Mehmet Uğur Doğan, Bente Maegaard, Joseph Mariani, Asuncion Moreno, Jan Odijk, and Stelios Piperidis (eds.), *Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)*, pp. 2214–2218, Istanbul, Turkey, May 2012. European Language Resources Association (ELRA). URL http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf.

Wolfgang Wahlster. Verbmobil: Foundations of speech-to-speech translation. In *Artificial Intelligence*, 2000. URL https://api.semanticscholar.org/CorpusID:265678893.

Changhan Wang, Morgane Riviere, Ann Lee, Anne Wu, Chaitanya Talnikar, Daniel Haziza, Mary Williamson, Juan Pino, and Emmanuel Dupoux. VoxPopuli: A large-scale multilingual speech corpus for representation learning, semi-supervised learning and interpretation. In *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)*, pp. 993–1003, Online, August 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.acl-long.80. URL https://aclanthology.org/2021.acl-long.80.

Changhan Wang, Hirofumi Inaguma, Peng-Jen Chen, Ilia Kulikov, Yun Tang, Wei-Ning Hsu, Michael Auli, and Juan Pino. Simple and effective unsupervised speech translation, 2022a. URL https://arxiv.org/abs/2210.10191.

Changhan Wang, Hirofumi Inaguma, Peng-Jen Chen, Ilia Kulikov, Yun Tang, Wei-Ning Hsu, Michael Auli, and Juan Pino. Simple and effective unsupervised speech translation, 2022b.

Chengyi Wang, Sanyuan Chen, Yu Wu, Ziqiang Zhang, Long Zhou, Shujie Liu, Zhuo Chen, Yanqing Liu, Huaming Wang, Jinyu Li, Lei He, Sheng Zhao, and Furu Wei. Neural codec language models are zero-shot text to speech synthesizers, 2023.

Yongxin Zhu, Zhujin Gao, Xinyuan Zhou, Zhongyi Ye, and Linli Xu. DiffS2UT: A semantic preserving diffusion model for textless direct speech-to-speech translation, 2023.

Adrian Łańcucki. FastPitch: Parallel text-to-speech with pitch prediction, 2021.
## A Datasets
| Module | Dataset | Duration | Lang |
|---------------------------------|------------------|--------------|--------------|
| S2U Encoder: Pretraining | Librispeech | 960h | en |
| S2U Encoder: k-means Clustering | Librispeech, MLS | 48h, 48h | en, de |
| | Shrutilipi | 100h | mr |
| U2U Pretraining | Voxpopuli | 529h, 248h | en, de |
| | Europarl-small | 811h, 975h | en, de |
| | Europarl-mid | 2463h, 2918h | en, de |
| | Shrutilipi | 1000h | mr |
| U2U Finetuning (Toplines) | Europarl-ST | 83h, 27h | en→de, de→en |
| | CVSS | 91h, 88h | en→de, de→en |
| | synth-EP-ST | 83h, 42h | en→mr, mr→en |
| | synth-Shr-ST | 76h, 100h | en→mr, mr→en |
| U2U Finetuning (Low-Resource) | Europarl-ST | 10h, 10h | en→de, de→en |
| | CVSS | 10h, 10h | en→de, de→en |
| | synth-EP-ST | 30h, 30h | en→mr, mr→en |
| | synth-Shr-ST | 30h, 30h | en→mr, mr→en |
| U2U Backtranslation | Voxpopuli | 529h, 248h | en, de |
| | Common Voice | 294h, 89h | en, de |
| | Shrutilipi | 1000h | mr |
| U2S Vocoder | Voxpopuli | 529h, 248h | en, de |
| | Shrutilipi | 1000h | mr |
| Evaluation | Europarl-ST | 3h, 6h | en→de, de→en |
| | CVSS | 15h | de→en |
| | synth-EP-ST | 9h | mr→en |
| | synth-Shr-ST | 10h | mr→en |
Table 4: Summary of datasets used to develop our system, with datasets used by base pretrained models colored red. Datasets in the U2U Finetuning and Evaluation sections are parallel translation datasets; we report duration statistics for each translation direction separately, where the duration is that of the source speech.
## B Compute Details
We train all our models with an effective batch size equivalent to 4 NVIDIA A40 GPUs, often realized as 2 GPUs with gradient accumulation of 2, or 1 GPU with gradient accumulation of 4.
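For clarity, the equivalence only requires that the number of examples per optimizer update (GPUs x per-GPU batch x gradient-accumulation steps) stays constant; the toy calculation below illustrates this with a hypothetical per-GPU batch size, and is not taken from our training code.

```python
# Minimal sketch of the effective-batch-size equivalence described above.
# The per-GPU batch size is a hypothetical placeholder value.
per_gpu_batch = 8                            # examples per GPU per forward pass
target_effective_batch = 4 * per_gpu_batch   # what 4 GPUs see per optimizer update

for num_gpus in (4, 2, 1):
    grad_accum = target_effective_batch // (num_gpus * per_gpu_batch)
    effective = num_gpus * per_gpu_batch * grad_accum
    print(f"{num_gpus} GPU(s) x grad_accum {grad_accum} -> effective batch {effective}")
```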
## C S2U Encoder Ablations
To obtain the phoneme sequences for English and German, we use English and German phonemizers from the Montreal Forced Aligner6. For Marathi, we use a Kaldi-based ASR model trained on Shrutilipi data. First, we describe our ablations for English-German. We experiment with different base speech models (HuBERT vs. XLSR), layer indices, numbers of clusters (100 vs. 200) and types of clustering (one clustering for both languages jointly vs. separate clusterings), and choose the configuration that achieves the highest PNMI. We report PNMI results for some configurations in Figure 3.

6https://montreal-forced-aligner.readthedocs.io/en/latest/
![15_image_0.png](15_image_0.png)

(c) 100 monolingual vs. 200 mixed units, evaluated on German data

![15_image_1.png](15_image_1.png)

(d) 100 monolingual vs. 200 mixed units, evaluated on English data

Figure 3: PNMI vs. layer index, comparing different clustering settings for English and German. Higher is better.

![15_image_2.png](15_image_2.png)

Figure 4: PNMI with HuBERT and Indic wav2vec2.0 evaluated on Shrutilipi, computed for different layer indices, for Marathi. Higher is better.

For Marathi, we experiment with different base speech models (HuBERT vs. Indic-wav2vec2.0 (Javed et al., 2021)) and layer indices. We fix the number of clusters at 100. We choose the configuration that achieves the highest PNMI. We report PNMI results for some configurations in Figure 4.
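To make the selection criterion concrete, PNMI is the mutual information between the phoneme labels and the discrete units, normalized by the phoneme entropy. The sketch below shows one way such a score can be computed from frame-aligned phoneme and unit label arrays; the function and the toy arrays are our own illustration under that definition, not a released implementation.

```python
import numpy as np

def pnmi(phones: np.ndarray, units: np.ndarray) -> float:
    """Phone-normalized mutual information I(phone; unit) / H(phone).

    `phones` and `units` are integer label arrays of equal length,
    one entry per (frame-aligned) speech frame.
    """
    assert phones.shape == units.shape
    n = len(phones)
    # Empirical joint distribution over (phone, unit) pairs.
    joint = np.zeros((phones.max() + 1, units.max() + 1))
    for p, u in zip(phones, units):
        joint[p, u] += 1
    joint /= n
    p_phone = joint.sum(axis=1, keepdims=True)
    p_unit = joint.sum(axis=0, keepdims=True)
    # Mutual information and phone entropy, skipping zero-probability cells.
    nz = joint > 0
    mi = np.sum(joint[nz] * np.log(joint[nz] / (p_phone @ p_unit)[nz]))
    h_phone = -np.sum(p_phone[p_phone > 0] * np.log(p_phone[p_phone > 0]))
    return float(mi / h_phone)

# Toy usage: units that are a one-to-one relabeling of the phones give PNMI ~ 1.
phones = np.array([0, 0, 1, 1, 2, 2, 2])
units = np.array([5, 5, 3, 3, 7, 7, 7])
print(pnmi(phones, units))  # ~ 1.0
```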
| Method | en Voxpopuli | de Voxpopuli | en LJSpeech |
|---------------------|----------------|----------------|---------------|
| Ground Truth | 4.89 | 8.44 | 3.80 |
| (Lee et al., 2022a) | 10.56 | - | 7.69 |
| Ours | 8.53 | 19.46 | 6.72 |

Table 5: S2U + U2S resynthesis performance. WER is computed between the ASR transcript of the resynthesized speech and the ground-truth transcript; lower is better. We also include the WER of the ground-truth speech as a lower bound.
## D S2U + U2S Resynthesis Evaluation
We compute the resynthesis WER as follows: (1) pass input speech to the S2U encoder and generate the unit sequence, (2) pass the generated unit sequence to our U2S vocoder to synthesize speech, (3) transcribe the synthesized speech using ASR, and (4) compute the word error rate between the ASR transcript and the ground-truth transcript of the input speech. To account for errors from the ASR model itself, we also compute the WER between the ASR transcript of the input speech utterance ('ground-truth' speech) and the ground-truth transcript, and treat it as a lower bound. We use test sets from English and German Voxpopuli (Wang et al., 2021) and English LJSpeech (Ito & Johnson, 2017) with our synthetic single-speaker speech. Table 5 presents these results. We find that the resynthesis WERs are fairly good for English and worse for German. Based on qualitative analysis of the German input speech (which is already single-speaker synthetic speech) and the resynthesized speech (passed through S2U and U2S), we find that the input speech itself contains stress and pronunciation errors, driving up the ground-truth WER, which further cascades into the model resynthesis WER. We still use this model because it is the best we could build with existing tools.
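As a sketch of this evaluation loop, the snippet below mirrors steps (1)-(4). The callables `speech_to_units`, `units_to_speech`, and `transcribe` are hypothetical wrappers around the S2U encoder, U2S vocoder, and ASR model, and WER is computed with the open-source `jiwer` package, which is not necessarily the toolkit used for the numbers in Table 5.

```python
import jiwer  # pip install jiwer

def resynthesis_wer(utterances, speech_to_units, units_to_speech, transcribe):
    """Steps (1)-(4): S2U -> U2S -> ASR -> WER against gold transcripts.

    `utterances` is a list of (input_waveform, gold_transcript) pairs;
    the three callables wrap the S2U encoder, U2S vocoder, and ASR model.
    Returns (resynthesis WER, ASR-only lower-bound WER).
    """
    hyps, refs, asr_floor = [], [], []
    for wav, gold_text in utterances:
        units = speech_to_units(wav)           # (1) discretize the input speech
        resynth_wav = units_to_speech(units)   # (2) vocode the units back to speech
        hyps.append(transcribe(resynth_wav))   # (3) ASR on the resynthesized speech
        refs.append(gold_text)
        asr_floor.append(transcribe(wav))      # ASR on the original input speech
    # (4) WER of the resynthesized speech, plus the ASR-only lower bound.
    return jiwer.wer(refs, hyps), jiwer.wer(refs, asr_floor)
```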
## E Example Outputs
We present example outputs from our models. First, we showcase 10 cherry-picked examples in Table 6, two from each evaluated language pair and domain. Our best models, the post-backtranslation models (rows ⃝j and ⃝q in Tables 2 and 3), perform well on these examples. We present the ground-truth transcripts of the source and target utterances, the ASR transcript of the target utterance predicted by the pre-backtranslation finetuned models (rows ⃝i and ⃝p in Tables 2 and 3), and the ASR transcript of the target utterance predicted by our best models, the post-backtranslation models. We observe that our post-backtranslation models translate these cherry-picked examples nearly perfectly; their outputs can be categorized into (a) no mistakes (rows 1, 5, 7, 9), (b) valid replacements that largely preserve sentence meaning (rows 2, 4, 8), and (c) minor pronunciation errors (rows 6, 10). On the other hand, predictions from the finetuned model are overall worse, categorized into (a) no mistakes (row 1), (b) valid meaning-preserving replacements (row 2), (c) large meaning changes (rows 3, 4, 7, 9, 10), and (d) incoherent output (rows 5, 6, 8).

We also sample 5 randomly-picked examples, one from each setting, to again compare our pre-backtranslation finetuned models and our best post-backtranslation models in Table 7. The examples show that the models get several of the words and semantics right, but often mistranslate certain words and make egregious grammatical and language-modelling mistakes. We can see that our post-backtranslation model is overall better than the finetuned model on the English-German rows (1) and (2), worse on row (3), and performs similarly on rows (4) and (5).
| | Source Utterance | Target Utterance (Gold) | Prediction from finetuned model | Prediction from post-backtranslation model |
|------|------------------|--------------------------|----------------------------------|---------------------------------------------|
| | **en→de (Europarl-ST)** | | | |
| (1) | you can take initiatives | sie können initiativen ergreifen | sie können initiativen ergreifen | sie können initiativen ergreifen |
| (2) | madam president i supported this report | frau präsidentin ich habe diesen bericht unterstützt | frau präsidentin ich unterstütze diesen bericht | frau präsidentin ich habe diesen bericht gestimmt |
| | **de→en (Europarl-ST)** | | | |
| (3) | ich denke da sind wir auf dem richtigen weg | i think we are on the right track here | i think we should be aware of this | i think we are on the right track |
| (4) | ich denke es ist klar dass die bürger und bürgerinnen der europäischen union diese steuer wollen und ich denke dass es eine große verantwortung ist | i think it is clear that the citizens of the european union want this tax and i think we have a great responsibility here | i think that it is clear that the citizens of the european union want to do with these tasks and to do with the european union what it wants to do | i think it is clear that the citizens of the european union want to be taxed and i think it is a major responsibility |
| | **de→en (CVSS)** | | | |
| (5) | stellst du die musik bitte auf zimmerlautstärke albert rief seine mutter | are you turning the volume down to room volume albert his mother screamed | are you turning the music albert towards its mountain | are you turning the volume down to room volume albert his mother screamed |
| (6) | los angeles liegt an der westküste | los angeles is located on the west coast | rock loosen hot air line at the west coast | rose angeles is located on the west coast |
| | **mr→en (s-Ep-ST)** | | | |
| (7) | yA kArZA\m mF yA ah vAlAQyA bAjn mt dU fkt nAhF | for these reasons i cannot vote in favour of this report | for this reason i am in favour of the report | for these reasons i cannot vote in favour of this report |
| (8) | t aADFc sDAErt kl gl aAh pr\t aAZKF kAm krZ aAv[yk aAh | it has already been modified but more work needs to be done | it is improving barrowness improving but it must be forgotten | it has already made improvements but more work needs to be done |
| | **mr→en (s-Shr-ST)** | | | |
| (9) | p\ccAFs vqA vrQyA svAnF lsFkrZ av[y kzn ]yA | all those above forty five years must get vaccinated | more than forty five years of vaccination papers | all those above forty five years must get vaccinated |
| (10) | t kAl m\b{it bAtmFdArA\fF bolt hot | he was talking to reporters in mumbai yesterday | he was talking to reporters in mabay to day | he was talking to reporters in mumba yesterday |
Table 6: Cherry-picked examples for our best S2ST models (the post-backtranslation models), reporting predictions from both the finetuned and post-backtranslation models. We manually annotate the differences between the gold utterance and the prediction from the post-backtranslation model, align them to the source utterance, and underline the differences.
| | Source Utterance | Target Utterance (Gold) | Prediction from finetuned model | Prediction from post-backtranslation model |
|-----|------------------|--------------------------|----------------------------------|---------------------------------------------|
| | **en→de (Europarl-ST)** | | | |
| (1) | goods and cargo have been delayed or not transported at all and businesses both large and small have been affected | waren und güterlieferungen wurden verschoben oder ganz gestoppt und sowohl kleine als auch große unternehmen sind betroffen | kosovo und konsum wurden zerstört oder wurden nicht erwähnt oder angemessen sein können | günstige und kunden wurden im vorle von kmos nicht erwähnt oder noch nicht erwähnt von allen unternehmen großen unternehmen |
| | **de→en (Europarl-ST)** | | | |
| (2) | wir sollten hier nicht mit zweierlei maß messen | we must not apply double standards here | we should not do so with these matters | we should not be here with the two sides |
| | **de→en (CVSS)** | | | |
| (3) | ihr schalldeckel trägt herabhängende quasten und ist mit einem pelikan bekrönt | their sounding board has loose hanging tassels and is crowned with a pelican | year study teacher however remaining costs and an ice and hobbies | child dictatorial territorial castes and is managed by a pellikov |
| | **mr→en (s-Ep-ST)** | | | |
| (4) | n{sEg k s\sADn aAEZ EnsgA c s\r"Z kryAsAWF aApSyAlA pyA vrZ s\r"ZAQyA "/At s\vAdAcF aAv[yktA aAh | we need dialogue in the field of environmental protection in order to conserve natural resources and nature | we need collectively in the area of protection resources for natural resources and jobs in order to protect natural resources and defense quality basis | we need a clear signal of environmental protection |
| | **mr→en (s-Shr-ST)** | | | |
| (5) | m\b{i aAEZ upngrA\m@y gSyA kAhF EdvsA\t jordAr pAUs JASyAm\ sAt mHy tlAvA\QyA pAyAt l"ZFy vAY JASyAn\ m\b{ilA pYFl bArA mEhn pAZF prvWA srFtpZ hoU fkZAr aAh | heavy rains in mumbai and its suburbs in the last few days have significantly increased the water level in the seven main lakes ensuring smooth water supply to mumbai for the next twelve months | in the last few days ero people who have done in mumba mumbai soon reins have done in the last few days in the last few days mumbai | in mumba and opportunities of mumba and mumba who have received water in seventeen t h needs water in the last few days by the water in the mumbai |
Table 7: Randomly sampled examples comparing our finetuned and post-backtranslation models.
zTNVjQXZyx/zTNVjQXZyx_meta.json
ADDED
{
    "languages": null,
    "filetype": "pdf",
    "toc": [],
    "pages": 19,
    "ocr_stats": {
        "ocr_pages": 1,
        "ocr_failed": 0,
        "ocr_success": 1,
        "ocr_engine": "surya"
    },
    "block_stats": {
        "header_footer": 18,
        "code": 0,
        "table": 8,
        "equations": {
            "successful_ocr": 0,
            "unsuccessful_ocr": 0,
            "equations": 0
        }
    },
    "postprocess_stats": {
        "edit": {}
    }
}