paper | paper_id | table_caption | table_column_names | table_content_values | text | full_body_text
---|---|---|---|---|---|---
Neural Belief Tracker: Data-Driven Dialogue State Tracking | 1606.03777 | Table 2: DSTC2 and WOZ 2.0 test set performance (joint goals and requests) of the NBT-CNN model making use of three different word vector collections. The asterisk indicates statistically significant improvement over the baseline xavier (random) word vectors (paired t-test; p<0.05). |

| Word Vectors | DSTC2 Goals | DSTC2 Requests | WOZ 2.0 Goals | WOZ 2.0 Requests |
|---|---|---|---|---|
| xavier **(No Info.)** | 64.2 | 81.2 | 81.2 | 90.7 |
| **GloVe** | 69.0* | 96.4* | 80.1 | 91.4 |
| **Paragram-SL999** | **73.4\*** | **96.5\*** | **84.2\*** | **91.6** |
The NBT models use the semantic relations embedded in the pre-trained word vectors to handle semantic variation and produce high-quality intermediate representations. Table 2 shows the performance of NBT-CNN models making use of three different word vector collections: 1) 'random' word vectors initialised using the xavier initialisation; 2) distributional GloVe vectors, trained using co-occurrence information in large textual corpora; and 3) semantically specialised Paragram-SL999 vectors (Wieting et al.). Paragram-SL999 vectors (significantly) outperformed GloVe and xavier vectors for goal tracking on both datasets. The gains are particularly robust for noisy DSTC2 data, where both collections of pre-trained vectors consistently outperformed random initialisation. The gains are weaker for the noise-free WOZ 2.0 dataset, which seems to be large (and clean) enough for the NBT model to learn task-specific rephrasings and compensate for the lack of semantic content in the word vectors. For this dataset, GloVe vectors do not improve over the randomly initialised ones. We believe this happens because distributional models keep related, yet antonymous words close together (e.g. north and south, expensive and inexpensive), offsetting the useful semantic content embedded in these vector spaces. The NBT-DNN model showed the same trends. |
\documentclass[11pt,a4paper]{article}
\usepackage[hyperref]{acl2017}
\aclfinalcopy % Uncomment this line for the final submission
\setcounter{dbltopnumber}{8}
\setcounter{topnumber}{2}
\setcounter{bottomnumber}{2}
\setcounter{totalnumber}{4}
\renewcommand{\topfraction}{0.85}
\renewcommand{\bottomfraction}{0.85}
\renewcommand{\textfraction}{0.15}
\renewcommand{\floatpagefraction}{0.7}
\DeclareMathOperator*{\argmax}{arg\,max}
\newcommand\BibTeX{B{\sc ib}\TeX}
\title{Neural Belief Tracker: Data-Driven Dialogue State Tracking}
\author{Nikola Mrk\v{s}i\'c$^{\mathbf{1}}$, ~ Diarmuid {\'O S\'eaghdha}$^{\mathbf{2}}$ \\
\textbf{Tsung-Hsien Wen$^{\mathbf{1}}$, ~ {Blaise Thomson$^{\mathbf{2}}$, ~ Steve Young$^{\mathbf{1}}$}} \\
$^{\mathbf{1}}$ University of Cambridge \\
$^{\mathbf{2}}$ Apple Inc. \\
{ \tt \{nm480, thw28, sjy\}@cam.ac.uk} \\ { \tt\{doseaghdha, blaisethom\}@apple.com}}
\date{}
\begin{document}
\maketitle
\begin{abstract}
One of the core components of modern spoken dialogue systems is the \textit{belief tracker}, which estimates the user's goal at every step of the dialogue. However, most current approaches have difficulty scaling to larger, more complex dialogue domains. This is due to their dependency on either: \textbf{a)} Spoken Language Understanding models that require large amounts of annotated training data; or \textbf{b)} hand-crafted lexicons for capturing some of the linguistic variation in users' language. We propose a novel Neural Belief Tracking (NBT) framework which overcomes these problems by building on recent advances in representation learning. NBT models reason over pre-trained word vectors, learning to compose them into distributed representations of user utterances and dialogue context. Our evaluation on two datasets shows that this approach surpasses past limitations, matching the performance of state-of-the-art models which rely on hand-crafted semantic lexicons and outperforming them when such lexicons are not provided.
\end{abstract}
\section{Introduction}
Spoken dialogue systems (SDS) allow users to interact with computer applications through conversation. Task-based systems help users achieve goals such as finding restaurants or booking flights. The \emph{dialogue state tracking} (DST) component of an SDS serves to interpret user input and update the \emph{belief state}, which is the system's internal representation of the state of the conversation \cite{young:10c}. This is a probability distribution over dialogue states used by the downstream \emph{dialogue manager} to decide which action the system should perform next \cite{su:2016:nnpolicy,Su:16}; the system action is then verbalised by the {natural language generator} \cite{wen:15a,wen:15b,Dusek:15}.
The Dialogue State Tracking Challenge (DSTC) series of shared tasks has provided a common evaluation framework accompanied by labelled datasets \cite{Williams:16}. In this framework, the dialogue system is supported by a \emph{domain ontology} which describes the range of user intents the system can process. The ontology defines a collection of \emph{slots} and the \emph{values} that each slot can take. The system must track the search constraints expressed by users (\emph{goals} or \emph{informable} slots) and questions the users ask about search results (\emph{requests}), taking into account each user utterance (input via a speech recogniser) and the dialogue context (e.g., what the system just said). The example in Figure \ref{fig:example-dialogue} shows the true state after each user utterance in a three-turn conversation. As can be seen in this example, DST models depend on identifying mentions of ontology items in user utterances. This becomes a non-trivial task when confronted with lexical variation, the dynamics of context and noisy automated speech recognition (ASR) output.
Traditional statistical approaches use separate Spoken Language Understanding (SLU) modules to address lexical variability within a single dialogue turn. However, training such models requires substantial amounts of {domain-specific} annotation. Alternatively, turn-level SLU and cross-turn DST can be coalesced into a single model to achieve superior belief tracking performance, as shown by \newcite{Henderson:14b}. Such coupled models typically rely on {manually constructed} semantic dictionaries to identify alternative mentions of ontology items that vary lexically or morphologically. Figure \ref{fig:sem_dict} gives an example of such a dictionary for three slot-value pairs. This approach, which we term \emph{delexicalisation}, is clearly not scalable to larger, more complex dialogue domains. Importantly, the focus on English in DST research understates the considerable challenges that morphology poses to systems based on exact matching in morphologically richer languages such as Italian or German (see \newcite{Vulic:2017}).
In this paper, we present two new models, collectively called the {Neural Belief Tracker} (NBT) family. The proposed models couple SLU and DST, efficiently learning to handle variation without requiring {any} hand-crafted resources. To do that, NBT models move away from exact matching and instead reason entirely over pre-trained word vectors. The vectors making up the user utterance and preceding system output are first composed into intermediate representations. These representations are then used to decide which of the ontology-defined intents have been expressed by the user up to that point in the conversation.
To the best of our knowledge, NBT models are the first to successfully use pre-trained word vector spaces to improve the language understanding capability of belief tracking models. In evaluation on two datasets, we show that: \textbf{a)} NBT models match the performance of delexicalisation-based models which make use of hand-crafted semantic lexicons; and \textbf{b)} the NBT models significantly outperform those models when such resources are not available. Consequently, we believe this work proposes a framework better-suited to scaling belief tracking models for deployment in real-world dialogue systems operating over sophisticated application domains where the creation of such domain-specific lexicons would be infeasible. % comparable task-specific requirements
\section{Background}
Models for probabilistic dialogue state tracking, or \emph{belief tracking}, were introduced as components of spoken dialogue systems in order to better handle noisy speech recognition and other sources of uncertainty in understanding a user's goals \cite{Bohus:06,Williams:07,young:10c}. Modern dialogue management policies can learn to use a tracker's distribution over intents to decide whether to execute an action or request clarification from the user. As mentioned above, the DSTC shared tasks have spurred research on this problem and established a standard evaluation paradigm \cite{Williams:13a,Henderson:14c,Henderson:14a}. In this setting, the task is defined by an \emph{ontology} that enumerates the goals a user can specify and the attributes of entities that the user can request information about. \iffalse\footnote{Alternative \emph{chat-bot} style systems do not make use of task ontologies or the pipeline model. Instead, these models learn to generate/choose system responses based on previous dialogue turns \cite{vinyals:15,Lowe:15,Serban:16,Serban:16b,Anjuli:17}. This means these models cannot interact with databases or react to user queries different from those encountered in their training data.} \fi Many different belief tracking models have been proposed in the literature, from generative \cite{Thomson:10} and discriminative \cite{Henderson:14b} statistical models to rule-based systems \cite{Wang:13}. To motivate the work presented here, we categorise prior research according to their reliance (or otherwise) on a separate SLU module for interpreting user utterances:\footnote{The best-performing models in DSTC2 all used both raw ASR output and the output of (potentially more than one) SLU decoders \cite{Williams:14,Williams:16}. This does not mean that those models are immune to the drawbacks identified here for the two model categories; in fact, they share the drawbacks of both.}
\paragraph{Separate SLU} Traditional SDS pipelines use Spoken Language Understanding (SLU) decoders to detect slot-value pairs expressed in the Automatic Speech Recognition (ASR) output. The downstream DST model then combines this information with the past dialogue context to update the belief state \cite{Thomson:10,Wang:13,Lee:16,Perez:16,Perez:16b,Sun:16,Jang:16,Shi:2016,Dernoncourt:16a,Liu:2017,Vodolan:2017}. In the DSTC challenges, some systems used the output of template-based matching systems such as Phoenix \cite{Wang:94}. However, more robust and accurate statistical SLU systems are available. Many discriminative approaches to spoken dialogue SLU train independent binary models that decide whether each slot-value pair was expressed in the user utterance. Given enough data, these models can learn which lexical features are good indicators for a given value and can capture elements of paraphrasing \cite{Mairesse:09}. This line of work later shifted focus to robust handling of rich ASR output \cite{Henderson:12,Tur:13}. SLU has also been treated as a sequence labelling problem, where each word in an utterance is labelled according to its role in the user's intent; standard labelling models such as CRFs or Recurrent Neural Networks can then be used \cite[i.a.]{Raymond:07,Yao:14,Celikyilmaz:2015,Mesnil:15,Peng:15,Zhang:16,Liu:16a,Vu:2016,Liu:16b}. Other approaches adopt a more complex modelling structure inspired by semantic parsing \cite{Saleh:14,Vlachos:14}. One drawback shared by these methods is their resource requirements, either because they need to learn independent parameters for each slot and value or because they need fine-grained manual annotation at the word level. This hinders scaling to larger, more realistic application domains.
\paragraph{Joint SLU/DST} Research on belief tracking has found it advantageous to reason about SLU and DST jointly, taking ASR predictions as input and generating belief states as output \cite{Henderson:14b,Sun:14,Zilka:15,Mrksic:15}. In DSTC2, systems which used no external SLU module outperformed all systems that only used external SLU features. Joint models typically rely on a strategy known as \emph{delexicalisation} whereby slots and values mentioned in the text are replaced with generic labels. Once the dataset is transformed in this manner, one can extract a collection of template-like $n$-gram features such as \textbf{[want \emph{tagged-value} food]}. To perform belief tracking, the shared model iterates over all slot-value pairs, extracting delexicalised feature vectors and making a separate binary decision regarding each pair. Delexicalisation introduces a hidden dependency that is rarely discussed: how do we identify slot/value mentions in text? For toy domains, one can manually construct \emph{semantic dictionaries} which list the potential rephrasings for all slot values. As shown by Mrk{\v{s}}i\'c et al.~\shortcite{Mrksic:16}, the use of such dictionaries is essential for the performance of current delexicalisation-based models. Again though, this will not scale to the rich variety of user language or to general domains.
The primary motivation for the work presented in this paper is to overcome the limitations that affect previous belief tracking models. The NBT model efficiently learns from the available data by: \textbf{1)} leveraging semantic information from pre-trained word vectors to resolve lexical/morphological ambiguity; \textbf{2)} maximising the number of parameters shared across ontology values; and \textbf{3)} having the flexibility to learn domain-specific paraphrasings and other kinds of variation that make it infeasible to rely on exact matching and delexicalisation as a robust strategy.
\section{Neural Belief Tracker}
The Neural Belief Tracker (NBT) is a model designed to detect the slot-value pairs that make up the user's goal at a given turn during the flow of dialogue. Its input consists of the system dialogue acts preceding the user input, the user utterance itself, and a single candidate slot-value pair that it needs to make a decision about. For instance, the model might have to decide whether the goal \textsc{food=Italian} has been expressed in \emph{`I'm looking for good pizza'}. To perform belief tracking, the NBT model \emph{iterates} over {all} candidate slot-value pairs (defined by the ontology), and decides which ones have just been expressed by the user.
Figure \ref{fig:sys_diagram} presents the flow of information in the model. The first layer in the NBT hierarchy performs representation learning given the three model inputs, producing vector representations for the user utterance ($\mathbf{r}$), the {current} candidate slot-value pair ($\mathbf{c}$) and the system dialogue acts ($\mathbf{t_{q}, t_{s}, t_{v}}$). Subsequently, the learned vector representations interact through the \emph{context modelling} and \emph{semantic decoding} submodules to obtain the intermediate \emph{interaction summary} vectors $\mathbf{d_{r}, d_{c}}$ and $\mathbf{d}$. These are used as input to the final \emph{decision-making} module which decides whether the user expressed the intent represented by the candidate slot-value pair.
\subsection{Representation Learning}
For any given user utterance, system act(s) and candidate slot-value pair, the representation learning submodules produce vector representations which act as input for the downstream components of the model. All representation learning subcomponents make use of pre-trained collections of word vectors. As shown by Mrk{\v{s}}i\'c et al.~\shortcite{Mrksic:16}, specialising word vectors to express \emph{semantic similarity} rather than \emph{relatedness} is essential for improving belief tracking performance. For this reason, we use the semantically-specialised Paragram-SL999 word vectors \cite{Wieting:15} throughout this work. The NBT training procedure keeps these vectors fixed: that way, at test time, unseen words semantically related to familiar slot values (e.g.~\emph{inexpensive} to \emph{cheap}) will be recognised purely by their position in the original vector space (see also Rockt\"aschel et al.~\shortcite{rocktaschel:2016}). This means that the NBT model parameters can be shared across all values of the given slot, or even across all slots.
Let $u$ represent a user utterance consisting of $k_u$ words $u_1, u_2, \ldots, u_{k_u}$. Each word has an associated word vector $\mathbf{u}_1, \ldots, \mathbf{u}_{k_u}$.
We propose two model variants which differ in the method used to produce vector representations of $u$: \textsc{NBT-DNN} and \textsc{NBT-CNN}. Both act over the constituent $n$-grams of the utterance. Let $\mathbf{v}_{i}^{n}$ be the concatenation of the $n$ word vectors starting at index $i$, so that:
\begin{equation}
\mathbf{v}_{i}^{n} = \mathbf{u}_{i} \oplus \ldots \oplus \mathbf{u}_{i+n-1}
\end{equation}
\noindent where $\oplus$ denotes vector concatenation. The simpler of our two models, which we term \textsc{NBT-DNN}, is shown in Figure \ref{fig:nbt_dnn}. This model computes cumulative $n$-gram representation vectors $\mathbf{r}_{1}$, $\mathbf{r}_{2}$ and $\mathbf{r}_{3}$, which are the $n$-gram `summaries' of the unigrams, bigrams and trigrams in the user utterance:% For $n = 1,2,3$, these are expressed as:
\begin{equation}
\mathbf{r}_{n} = \sum_{i=1}^{k_u-n+1}{\mathbf{v}_{i}^{n}} %, ~~
\end{equation}
\noindent Each of these vectors is then non-linearly mapped to intermediate representations of the same size: %For $n$-gram lengths of $1,2,3$, these are given by:
\begin{equation}
\mathbf{r}_{n}' = \sigma (W_{n}^{s}\mathbf{r}_{n} + b_{n}^{s})
\end{equation}
\noindent where the weight matrices and bias terms map the cumulative $n$-grams to vectors of the same dimensionality and $\sigma$ denotes the sigmoid activation function. We maintain a separate set of parameters for each slot (indicated by superscript $s$). The three vectors are then summed to obtain a single representation for the user utterance:
\begin{equation}
\mathbf{r} ~ = ~ \mathbf{r}_{1}' + \mathbf{r}_{2}' +\mathbf{r}_{3}' \label{eqn:r} \\
\end{equation}
The cumulative $n$-gram representations used by this model are just unweighted sums of all word vectors in the utterance. Ideally, the model should learn to recognise which parts of the utterance are more relevant for the subsequent classification task. For instance, it could learn to ignore verbs or stop words and pay more attention to adjectives and nouns which are more likely to express slot values.
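For illustration, the following is a minimal NumPy sketch of the \textsc{NBT-DNN} utterance representation described above; the shapes and random parameters are assumptions for exposition and do not reproduce the authors' TensorFlow implementation.
\begin{verbatim}
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nbt_dnn_utterance_repr(word_vectors, W, b):
    """Cumulative n-gram utterance representation (NBT-DNN sketch).

    word_vectors: (k_u, D) array, one row per word of the utterance.
    W, b: per-slot parameters keyed by n in {1, 2, 3};
          W[n] has shape (D, n*D), b[n] has shape (D,).
    """
    k_u, D = word_vectors.shape
    r = np.zeros(D)
    for n in (1, 2, 3):
        if k_u < n:
            continue
        # v_i^n: concatenation of n consecutive word vectors;
        # r_n: their unweighted sum over all positions.
        r_n = sum(np.concatenate(word_vectors[i:i + n])
                  for i in range(k_u - n + 1))
        # Non-linear map to a D-dimensional summary, then summed into r.
        r += sigmoid(W[n] @ r_n + b[n])
    return r

# Toy usage with random (assumed) parameters.
rng = np.random.default_rng(0)
D, k_u = 300, 5
u = rng.normal(size=(k_u, D))
W = {n: rng.normal(scale=0.01, size=(D, n * D)) for n in (1, 2, 3)}
b = {n: np.zeros(D) for n in (1, 2, 3)}
r = nbt_dnn_utterance_repr(u, W, b)   # shape (300,)
\end{verbatim}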
\paragraph{\textsc{NBT-CNN}} Our second model draws inspiration from successful applications of Convolutional Neural Networks (CNNs) for language understanding \cite{Collobert:11,Kalchbrenner:14,Kim:14}. These models typically apply a number of convolutional filters to $n$-grams in the input sentence, followed by non-linear activation functions and max-pooling. Following this approach, the \textsc{NBT-CNN} model applies $L=300$ different filters for $n$-gram lengths of $1,2$ and $3$ (Figure \ref{fig:nbt_cnn}). Let $F_{n}^{s} \in R^{L \times nD}$ denote the collection of filters for each value of $n$, where $D = 300$ is the word vector dimensionality. If $\mathbf{v}_{i}^{n}$ denotes the concatenation of $n$ word vectors starting at index $i$, let $\mathbf{m}_{n} = [\mathbf{v}_{1}^{n}; \mathbf{v}_{2}^{n}; \ldots; \mathbf{v}_{k_u-n+1}^{n}]$ be the list of $n$-grams that convolutional filters of length $n$ run over. The three intermediate representations are then given by:
\begin{equation}
{R}_{n} = F_n^s ~ \mathbf{m}_n
\end{equation}
Each column of the intermediate matrices ${R}_n$ is produced by a single convolutional filter of length $n$. We obtain summary $n$-gram representations by pushing these representations through a rectified linear unit (ReLU) activation function \cite{Nair:2010icml} and max-pooling over time (i.e.~columns of the matrix) to get a single feature for each of the $L$ filters applied to the utterance:
\begin{equation}
\mathbf{r}_{n}' = \mathtt{maxpool} \left( \mathtt{ReLU} \left( {R}_{n} + b_{n}^{s} \right) \right) %\\
\end{equation}
\noindent where $b_{n}^{s}$ is a bias term broadcast across all filters. Finally, the three summary $n$-gram representations are summed to obtain the final utterance representation vector $\mathbf{r}$ (as in Equation \ref{eqn:r}). The \textsc{NBT-CNN} model is (by design) better suited to longer utterances, as its convolutional filters interact directly with subsequences of the utterance, and not just their noisy summaries given by the \textsc{NBT-DNN}'s cumulative $n$-grams.
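As a complementary illustration, here is a minimal NumPy sketch of the \textsc{NBT-CNN} convolution, ReLU and max-pooling steps; the filter values and shapes are illustrative assumptions rather than the trained model.
\begin{verbatim}
import numpy as np

def nbt_cnn_utterance_repr(word_vectors, filters, biases):
    """Convolutional utterance representation (NBT-CNN sketch).

    word_vectors: (k_u, D) array.
    filters: per-slot filters keyed by n in {1, 2, 3};
             filters[n] has shape (L, n*D) with L = 300 filters.
    biases:  biases[n] has shape (L,), broadcast across positions.
    """
    k_u, D = word_vectors.shape
    L = filters[1].shape[0]
    r = np.zeros(L)
    for n in (1, 2, 3):
        if k_u < n:
            continue
        # m_n: one concatenated n-gram vector per column.
        m = np.stack([np.concatenate(word_vectors[i:i + n])
                      for i in range(k_u - n + 1)], axis=1)  # (n*D, T)
        R = filters[n] @ m                                    # (L, T)
        # ReLU, then max-pool over time: one feature per filter.
        r += np.max(np.maximum(R + biases[n][:, None], 0.0), axis=1)
    return r
\end{verbatim}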
\subsection{Semantic Decoding}
The NBT diagram in Figure \ref{fig:sys_diagram} shows that the utterance representation $\mathbf{r}$ and the candidate slot-value pair representation $\mathbf{c}$ directly interact through the \emph{semantic decoding} module. This component decides whether the user explicitly expressed an intent matching the current candidate pair (i.e.~without taking the dialogue context into account). Examples of such matches would be \emph{`I want Thai food'} with \texttt{food=Thai} or more demanding ones such as \emph{`a pricey restaurant'} with \texttt{price=expensive}. This is where the use of high-quality pre-trained word vectors comes into play: a delexicalisation-based model could deal with the former example but would be helpless in the latter case, unless a human expert had provided a {semantic dictionary} listing all potential rephrasings for each value in the domain ontology.
Let the vector space representations of a candidate pair's slot name and value be given by $\mathbf{c_{s}}$ and $\mathbf{c_{v}}$ (with vectors of multi-word slot names/values summed together). The NBT model learns to map this tuple into a single vector $\mathbf{c}$ of the same dimensionality as the utterance representation $\mathbf{r}$. These two representations are then forced to interact in order to learn a similarity metric which discriminates between interactions of utterances with slot-value pairs that they either do or do not express:
\begin{align}
\mathbf{c} ~ &= \sigma \big( W_{c}^{s} (\mathbf{c_{s}} + \mathbf{c_{v}}) + b_{c}^{s} \big) \\
\mathbf{d} &= \mathbf{r} \otimes \mathbf{c}
\end{align}
\noindent where $\otimes$ denotes \emph{element-wise} vector multiplication. The dot product, which may seem like the more intuitive similarity metric, would reduce the rich set of features in $\mathbf{d}$ to a single scalar. The element-wise multiplication allows the downstream network to make better use of its parameters by learning non-linear interactions between sets of features in $\mathbf{r}$ and $\mathbf{c}$.\footnote{We also tried to concatenate $\mathbf{r}$ and $\mathbf{c}$ and pass that vector to the downstream decision-making neural network. However, this set-up led to very weak performance since our relatively small datasets did not suffice for the network to learn to model the interaction between the two feature vectors.}
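A minimal sketch of this interaction, with the per-slot parameters $W_{c}^{s}$ and $b_{c}^{s}$ assumed to be given, might look as follows (illustrative only):
\begin{verbatim}
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def semantic_decoding(r, slot_vec, value_vec, W_c, b_c):
    """Interaction of the utterance with one candidate slot-value pair.

    r: utterance representation, shape (D,).
    slot_vec, value_vec: word vectors of the candidate slot name and
        value (vectors of multi-word names/values summed beforehand).
    W_c: (D, D) per-slot weight matrix; b_c: (D,) bias.
    """
    c = sigmoid(W_c @ (slot_vec + value_vec) + b_c)  # candidate repr.
    d = r * c                                        # element-wise product
    return d
\end{verbatim}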
\subsection{Context Modelling}
This `decoder' does not yet suffice to extract intents from utterances in human-machine dialogue. To understand some queries, the belief tracker must be aware of \emph{context}, i.e.~the flow of dialogue leading up to the latest user utterance. While all previous system and user utterances are important, the most relevant one is the last system utterance, in which the dialogue system could have performed (among others) one of the following two \emph{system acts}:
\begin{enumerate}
\item \textbf{System Request}: The system asks the user about the value of a specific slot $T_{q}$. If the system utterance is: \emph{`what price range would you like?'} and the user answers with \emph{any}, the model must infer the reference to \emph{price range}, and not to other slots such as \emph{area} or \emph{food}.
\item \textbf{System Confirm:} The system asks the user to confirm whether a specific slot-value pair $(T_{s}, T_{v})$ is part of their desired constraints. For example, if the user responds to \emph{`how about Turkish food?'} with \emph{`yes'}, the model must be aware of the system act in order to correctly update the belief state.
\end{enumerate}
If we make the Markovian decision to only consider the last set of system acts, we can incorporate context modelling into the NBT. Let $\mathbf{t_{q}}$ and $(\mathbf{t_{s}}, \mathbf{t_{v}})$ be the word vectors of the arguments for the system request and confirm acts (zero vectors if none). The model computes the following measures of similarity between the system acts, candidate pair $(\mathbf{c_{s}}, \mathbf{c_{v}})$ and utterance representation $\mathbf{r}$:
\begin{align}
\mathbf{m_{r}} &= ~ (\mathbf{c_{s}} \cdot \mathbf{t_{q}}) \mathbf{r} \\
\mathbf{m_{c}} &= ~ ( \mathbf{c_{s}} \cdot \mathbf{t_{s}} ) ( \mathbf{c_{v}} \cdot \mathbf{t_{v}} ) \mathbf{r}
\end{align}
\noindent where $\cdot$ denotes dot product. The computed similarity terms act as gating mechanisms which only pass the utterance representation through if the system asked about the current candidate slot or slot-value pair. This type of interaction is particularly useful for the confirm system act: if the system asks the user to confirm, the user is likely not to mention any slot values, but to just respond affirmatively or negatively. This means that the model must consider the \emph{three-way interaction} between the utterance, candidate slot-value pair and the slot value pair offered by the system. If (and only if) the latter two are the same should the model consider the affirmative or negative polarity of the user utterance when making the subsequent binary decision.
\paragraph{Binary Decision Maker} The intermediate representations are passed through another hidden layer and then combined. If $\phi_{dim}(\mathbf{x}) = \sigma (W \mathbf{x} + b)$ is a layer which maps input vector $\mathbf{x}$ to a vector of size $dim$, the input to the final binary softmax (which represents the decision) is given by:
\begin{align*}
\mathbf{y} &= \phi_{2} \big( \phi_{100}(\mathbf{d}) + \phi_{100}({\mathbf{m_r}}) + \phi_{100}({\mathbf{m_c}}) \big)
\end{align*}
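Putting the context gates and the decision layer together, a hedged NumPy sketch of this computation (with assumed layer parameters) is:
\begin{verbatim}
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def nbt_decision(d, r, c_s, c_v, t_q, t_s, t_v, layers):
    """Context gating and final binary decision (sketch).

    d: semantic-decoding vector; r: utterance representation.
    c_s, c_v: candidate slot / value word vectors.
    t_q and (t_s, t_v): system request / confirm argument vectors
        (zero vectors when the corresponding act is absent).
    layers: dict of (W, b) pairs for 'd', 'mr', 'mc' (each mapping to
        100 dimensions) and 'out' (mapping to 2 dimensions).
    """
    m_r = (c_s @ t_q) * r                  # gate on the system request act
    m_c = (c_s @ t_s) * (c_v @ t_v) * r    # gate on the system confirm act

    def phi(name, x):
        W, b = layers[name]
        return sigmoid(W @ x + b)

    y = phi('out', phi('d', d) + phi('mr', m_r) + phi('mc', m_c))
    return softmax(y)  # [P(pair not expressed), P(pair expressed)]
\end{verbatim}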
\section{Belief State Update Mechanism}
In spoken dialogue systems, belief tracking models operate over the output of automatic speech recognition (ASR). Despite improvements to speech recognition, the need to make the most out of imperfect ASR will persist as dialogue systems are used in increasingly noisy environments.
In this work, we define a simple rule-based belief state update mechanism which can be applied to ASR $N$-best lists. For dialogue turn $t$, let $sys^{t-1}$ denote the preceding system output, and let $h^{t}$ denote the list of $N$ ASR hypotheses $h_{i}^{t}$ with posterior probabilities $p_{i}^{t}$. For any hypothesis $h^{t}_{i}$, slot $s$ and slot value $v \in V_{s}$, NBT models estimate $\mathbb{P}(s, v \mid h^{t}_{i}, sys^{t-1})$, which is the (turn-level) probability that $(s, v)$ was expressed in the given hypothesis. The predictions for $N$ such hypotheses are then combined as: \\
\begin{equation*}
\mathbb{P}(s, v \mid h^{t}, sys^{t-1}) = \sum_{i=1}^{N} p_{i}^{t} ~ \mathbb{P}\left( s,v \mid h_{i}^{t}, sys^{t-1}\right)
\end{equation*}
This turn-level belief state estimate is then combined with the (cumulative) belief state up to time $(t-1)$ to get the updated belief state estimate:
\begin{eqnarray*}
\mathbb{P}(s, v \mid h^{1:t}, sys^{1:t-1}) ~=~ \lambda ~ \mathbb{P}\left(s, v \mid h^{t}, sys^{t-1}\right) \\
+ ~ (1 - \lambda) ~ \mathbb{P}\left(s, v \mid h^{1:t-1}, sys^{1:t-2}\right)
\end{eqnarray*}
\noindent where $\lambda$ is the coefficient which determines the relative weight of the turn-level and previous turns' belief state estimates.\footnote{This coefficient was tuned on the DSTC2 development set. The best performance was achieved with $\lambda = 0.55$.} For slot $s$, the set of its \emph{detected values} at turn $t$ is then given by:
\begin{equation*}
V_{s}^{t} = \lbrace {v \in V_{s}} ~ \mid ~ {\mathbb{P}\left( s,v \mid h^{1:t}, sys^{1:t-1} \right) \geq 0.5} \rbrace
\end{equation*}
For informable (i.e.~goal-tracking) slots, the value in $V_{s}^{t}$ with the highest probability is chosen as the current goal (if $V_{s}^{t} \neq \emptyset$). For requests, all slots in $V_{req}^{t}$ are deemed to have been requested. As requestable slots serve to model single-turn user queries, they require no belief tracking across turns.
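A compact sketch of this update rule is given below; the \texttt{nbt\_predict} callable stands in for the turn-level NBT estimates and is an assumed interface rather than the released code.
\begin{verbatim}
def turn_level_estimate(asr_hypotheses, nbt_predict):
    """Combine turn-level NBT predictions over the ASR N-best list.

    asr_hypotheses: list of (hypothesis_text, posterior p_i) pairs.
    nbt_predict: callable mapping a hypothesis to a dict
        {(slot, value): probability}, given the last system output.
    """
    combined = {}
    for hyp, p in asr_hypotheses:
        for sv, prob in nbt_predict(hyp).items():
            combined[sv] = combined.get(sv, 0.0) + p * prob
    return combined

def update_belief_state(turn_probs, prev_belief, lam=0.55):
    """Interpolate turn-level and cumulative belief estimates;
    lambda = 0.55 was tuned on the DSTC2 development set."""
    keys = set(turn_probs) | set(prev_belief)
    belief = {k: lam * turn_probs.get(k, 0.0)
                 + (1.0 - lam) * prev_belief.get(k, 0.0) for k in keys}
    # Detected values: slot-value pairs with probability >= 0.5.
    detected = {k: p for k, p in belief.items() if p >= 0.5}
    return belief, detected
\end{verbatim}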
\section{Experiments}
\subsection{Datasets}
Two datasets were used for training and evaluation. Both consist of user conversations with task-oriented dialogue systems designed to help users find suitable restaurants around Cambridge, UK. The two corpora share the same domain ontology, which contains three \emph{informable} (i.e.~goal-tracking) slots: \textsc{food}, \textsc{area} and \textsc{price}. The users can specify {values} for these slots in order to find restaurants which best meet their criteria. Once the system suggests a restaurant, the users can ask about the values of up to eight \emph{requestable} slots (\textsc{phone number, address}, etc.). The two datasets are:
\begin{enumerate}
\item \textbf{DSTC2}: We use the transcriptions, ASR hypotheses and turn-level semantic labels provided for the Dialogue State Tracking Challenge 2 \cite{Henderson:14a}. The official transcriptions contain various spelling errors which we corrected manually; the cleaned version of the dataset is available at \url{mi.eng.cam.ac.uk/~nm480/dstc2-clean.zip}. The training data contains 2207 dialogues \iffalse (15,611 dialogue turns) \fi and the test set consists of 1117 dialogues. We train NBT models on transcriptions but report belief tracking performance on test set ASR hypotheses provided in the original challenge.
\item \textbf{WOZ 2.0}: Wen et al.~\shortcite{Wen:16} performed a Wizard of Oz style experiment in which Amazon Mechanical Turk users assumed the role of the system or the user of a task-oriented dialogue system based on the DSTC2 ontology. Users typed instead of using speech, which means performance in the WOZ experiments is more indicative of the model's capacity for semantic understanding than its robustness to ASR errors. Whereas in the DSTC2 dialogues users would quickly adapt to the system's (lack of) language understanding capability, the WOZ experimental design gave them freedom to use more sophisticated language. We expanded the original WOZ dataset from Wen et al.~\shortcite{Wen:16} using the same data collection procedure, yielding a total of 1200 dialogues. \iffalse (5,012 turns). \fi We divided these into 600 training, 200 validation and 400 test set dialogues. The WOZ 2.0 dataset is available at \url{mi.eng.cam.ac.uk/~nm480/woz_2.0.zip}.
\end{enumerate}
\paragraph{Training Examples} The two corpora are used to create training data for two separate experiments. For each dataset, we iterate over all train set utterances, generating one example for \emph{each} of the slot-value pairs in the ontology. An example consists of a transcription, its context (i.e.~list of preceding system acts) and a candidate slot-value pair. The binary label for each example indicates whether or not its utterance and context express the example's candidate pair. For instance, `\emph{I would like Irish food}' would generate a positive example for candidate pair \textsc{food={Irish}}, and a negative example for every other slot-value pair in the ontology.
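The construction of these training examples can be sketched as follows; the field names are illustrative assumptions about the preprocessed data, not the released preprocessing code.
\begin{verbatim}
def generate_training_examples(turns, ontology):
    """One example per (utterance, slot-value pair), labelled positive
    iff the utterance and its context express that pair.

    turns: iterable of dicts with keys 'transcription', 'system_acts'
        and 'true_pairs' (the gold slot-value pairs for the turn).
    ontology: dict mapping each slot to the set of its values.
    """
    examples = []
    for turn in turns:
        for slot, values in ontology.items():
            for value in values:
                label = int((slot, value) in turn['true_pairs'])
                examples.append((turn['transcription'],
                                 turn['system_acts'],
                                 (slot, value),
                                 label))
    return examples
\end{verbatim}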
\paragraph{Evaluation} We focus on two key evaluation metrics introduced in \cite{Henderson:14a}: \vspace{-0mm}
\begin{enumerate}
\item \textbf{Goals} (`joint goal accuracy'): the proportion of dialogue turns where all the user's search goal constraints were correctly identified; \vspace{-0mm}
\item \textbf{Requests}: similarly, the proportion of dialogue turns where user's requests for information were identified correctly. \vspace{-0mm}
\end{enumerate}
\subsection{Models}
We evaluate two NBT model variants: \textsc{NBT-DNN} and \textsc{NBT-CNN}. To train the models, we use the Adam optimizer \cite{Adam:15} with cross-entropy loss, backpropagating through all the NBT subcomponents while keeping the pre-trained word vectors fixed (in order to allow the model to deal with unseen words at test time). The model is trained separately for each slot. Due to the high class bias (most of the constructed examples are negative), we incorporate a fixed number of positive examples in each mini-batch.\footnote{Model hyperparameters were tuned on the respective validation sets. For both datasets, the initial Adam learning rate was set to $0.001$, and $\frac{1}{8}$th of positive examples were included in each mini-batch. The batch size did not affect performance: it was set to 256 in all experiments. Gradient clipping (to $\left[-2.0, 2.0\right]$) was used to handle exploding gradients. Dropout \cite{Srivastava:2014} was used for regularisation (with 50\% dropout rate on all intermediate representations). Both \textsc{NBT} models were implemented in TensorFlow \cite{tf:15}. }
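A sketch of the class-balanced mini-batch sampling described in the footnote is shown below, under the assumption that the stated $\frac{1}{8}$ refers to the fraction of each mini-batch drawn from the positive examples.
\begin{verbatim}
import random

def balanced_minibatch(examples, batch_size=256, pos_fraction=0.125):
    """Sample one mini-batch with a fixed share of positive examples.

    examples: list of (utterance, context, candidate_pair, label)
        tuples, as produced by generate_training_examples above.
    """
    positives = [e for e in examples if e[-1] == 1]
    negatives = [e for e in examples if e[-1] == 0]
    n_pos = max(1, int(round(batch_size * pos_fraction)))
    batch = (random.sample(positives, min(n_pos, len(positives))) +
             random.sample(negatives,
                           min(batch_size - n_pos, len(negatives))))
    random.shuffle(batch)
    return batch
\end{verbatim}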
\paragraph{Baseline Models} For each of the two datasets, we compare the NBT models to:
\begin{enumerate}
\item A baseline system that implements a well-known competitive delexicalisation-based model for that dataset. For DSTC2, the model is that of Henderson et al.~\shortcite{Henderson:14d,Henderson:14b}. This model is an $n$-gram based neural network model with recurrent connections between turns (but not inside utterances) which replaces occurrences of slot names and values with generic delexicalised features. For WOZ 2.0, we compare the NBT models to a more sophisticated belief tracking model presented in \cite{Wen:16}. This model uses an RNN for belief state updates and a CNN for turn-level feature extraction. Unlike \textsc{NBT-CNN}, their CNN operates not over vectors, but over delexicalised features akin to those used by \newcite{Henderson:14d}.
\item The same baseline model supplemented with a task-specific semantic dictionary (produced by the baseline system creators). The two dictionaries are available at \url{mi.eng.cam.ac.uk/\~nm480/sem-dict.zip}. The DSTC2 dictionary contains only three rephrasings. Nonetheless, the use of these rephrasings translates to substantial gains in DST performance (see Sect.~6.1). We believe this result supports our claim that the vocabulary used by Mechanical Turkers in DSTC2 was constrained by the system's inability to cope with lexical variation and ASR noise. The WOZ dictionary includes 38 rephrasings, showing that the unconstrained language used by Mechanical Turkers in the Wizard-of-Oz setup requires more elaborate lexicons.
\end{enumerate}
Both baseline models map exact matches of ontology-defined intents (and their lexicon-specified rephrasings) to one-hot delexicalised $n$-gram features. This means that pre-trained vectors cannot be incorporated directly into these models.
\section{Results}
\subsection{Belief Tracking Performance}
Table \ref{tab:dstc2_performance} shows the performance of NBT models trained and evaluated on DSTC2 and WOZ 2.0 datasets. The NBT models outperformed the baseline models in terms of both {joint goal} and request accuracies. For goals, the gains are \emph{always} statistically significant (paired $t$-test, $p<0.05$). Moreover, there was no statistically significant variation between the NBT and the lexicon-supplemented models, showing that the NBT can handle semantic relations which otherwise had to be explicitly encoded in semantic dictionaries.
While the NBT performs well across the board, we can compare its performance on the two datasets to understand its strengths. The improvement over the baseline is greater on WOZ 2.0, which corroborates our intuition that the NBT's ability to learn linguistic variation is vital for this dataset containing longer sentences, richer vocabulary and no ASR errors. By comparison, the language of the subjects in the DSTC2 dataset is less rich, and compensating for ASR errors is the main hurdle: given access to the DSTC2 test set transcriptions, the NBT models' goal accuracy rises to 0.96. This indicates that future work should focus on better ASR compensation if the model is to be deployed in environments with challenging acoustics.
\subsection{The Importance of Word Vector Spaces}
The NBT models use the semantic relations embedded in the pre-trained word vectors to handle semantic variation and produce high-quality intermediate representations. Table \ref{tab:wv_comparison} shows the performance of \textsc{NBT-CNN}\footnote{The \textsc{NBT-DNN} model showed the same trends. For brevity, Table \ref{tab:wv_comparison} presents only the \textsc{NBT-CNN} figures. } models making use of three different word vector collections: \textbf{1)} `random' word vectors initialised using the \textsc{xavier} initialisation \cite{Glorot:2010aistats}; \textbf{2)} distributional GloVe vectors \cite{Pennington:14}, trained using co-occurrence information in large textual corpora; and \textbf{3)} \emph{semantically specialised} Paragram-SL999 vectors \cite{Wieting:15}, which are obtained by injecting \emph{semantic similarity constraints} from the Paraphrase Database \cite{ppdb:13} into the distributional GloVe vectors in order to improve their semantic content.
The results in Table \ref{tab:wv_comparison} show that the use of semantically specialised word vectors leads to considerable performance gains: Paragram-SL999 vectors (significantly) outperformed GloVe and \textsc{xavier} vectors for goal tracking on both datasets. The gains are particularly robust for noisy DSTC2 data, where both collections of pre-trained vectors consistently outperformed random initialisation. The gains are weaker for the noise-free WOZ 2.0 dataset, which seems to be large (and clean) enough for the NBT model to learn task-specific rephrasings and compensate for the lack of semantic content in the word vectors. For this dataset, GloVe vectors do not improve over the randomly initialised ones. We believe this happens because distributional models keep related, yet antonymous words close together (e.g.~\emph{north} and \emph{south}, \emph{expensive} and \emph{inexpensive}), offsetting the useful semantic content embedded in these vector spaces.
\section{Conclusion}
In this paper, we have proposed a novel neural belief tracking (NBT) framework designed to overcome current obstacles to deploying dialogue systems in real-world dialogue domains. The NBT models offer the known advantages of coupling Spoken Language Understanding and Dialogue State Tracking, without relying on hand-crafted semantic lexicons to achieve state-of-the-art performance. Our evaluation demonstrated these benefits: the NBT models match the performance of models which make use of such lexicons and vastly outperform them when these are not available. Finally, we have shown that the performance of NBT models improves with the semantic quality of the underlying word vectors. To the best of our knowledge, we are the first to move past intrinsic evaluation and show that \emph{semantic specialisation} boosts performance in downstream tasks.
In future work, we intend to explore applications of the NBT for multi-domain dialogue systems, as well as in languages other than English that require handling of complex morphological variation.
\section*{Acknowledgements}
The authors would like to thank Ivan Vuli\'{c}, Ulrich Paquet, the Cambridge Dialogue Systems Group and the anonymous ACL reviewers for their constructive feedback and helpful discussions.
\clearpage
\bibliographystyle{acl2017}
\clearpage
\end{document}
|
Neural Belief Tracker: Data-Driven Dialogue State Tracking | 1606.03777 | Table 1: DSTC2 and WOZ 2.0 test set accuracies for: a) joint goals; and b) turn-level requests. The asterisk indicates statistically significant improvement over the baseline trackers (paired t-test; p<0.05). |

| DST Model | DSTC2 Goals | DSTC2 Requests | WOZ 2.0 Goals | WOZ 2.0 Requests |
|---|---|---|---|---|
| **Delexicalisation-Based Model** | 69.1 | 95.7 | 70.8 | 87.1 |
| **Delexicalisation-Based Model + Semantic Dictionary** | 72.9* | 95.7 | 83.7* | 87.6 |
| Neural Belief Tracker: NBT-DNN | 72.6* | 96.4 | **84.4\*** | 91.2* |
| Neural Belief Tracker: NBT-CNN | **73.4\*** | **96.5** | 84.2* | **91.6\*** |

The NBT models outperformed the baseline models in terms of both joint goal and request accuracies. For goals, the gains are always statistically significant (paired t-test, p<0.05). Moreover, there was no statistically significant variation between the NBT and the lexicon-supplemented models, showing that the NBT can handle semantic relations which otherwise had to be explicitly encoded in semantic dictionaries. |
\documentclass[11pt,a4paper]{article}
\usepackage[hyperref]{acl2017}
\aclfinalcopy % Uncomment this line for the final submission
\setcounter{dbltopnumber}{8}
\setcounter{topnumber}{2}
\setcounter{bottomnumber}{2}
\setcounter{totalnumber}{4}
\renewcommand{\topfraction}{0.85}
\renewcommand{\bottomfraction}{0.85}
\renewcommand{\textfraction}{0.15}
\renewcommand{\floatpagefraction}{0.7}
\DeclareMathOperator*{\argmax}{arg\,max}
\newcommand\BibTeX{B{\sc ib}\TeX}
\title{Neural Belief Tracker: Data-Driven Dialogue State Tracking}
\author{Nikola Mrk\v{s}i\'c$^{\mathbf{1}}$, ~ Diarmuid {\'O S\'eaghdha}$^{\mathbf{2}}$ \\
\textbf{Tsung-Hsien Wen$^{\mathbf{1}}$, ~ {Blaise Thomson$^{\mathbf{2}}$, ~ Steve Young$^{\mathbf{1}}$}} \\
$^{\mathbf{1}}$ University of Cambridge \\
$^{\mathbf{2}}$ Apple Inc. \\
{ \tt \{nm480, thw28, sjy\}@cam.ac.uk} \\ { \tt\{doseaghdha, blaisethom\}@apple.com}}
\date{}
\begin{document}
\maketitle
\begin{abstract}
One of the core components of modern spoken dialogue systems is the \textit{belief tracker}, which estimates the user's goal at every step of the dialogue. However, most current approaches have difficulty scaling to larger, more complex dialogue domains. This is due to their dependency on either: \textbf{a)} Spoken Language Understanding models that require large amounts of annotated training data; or \textbf{b)} hand-crafted lexicons for capturing some of the linguistic variation in users' language. We propose a novel Neural Belief Tracking (NBT) framework which overcomes these problems by building on recent advances in representation learning. NBT models reason over pre-trained word vectors, learning to compose them into distributed representations of user utterances and dialogue context. Our evaluation on two datasets shows that this approach surpasses past limitations, matching the performance of state-of-the-art models which rely on hand-crafted semantic lexicons and outperforming them when such lexicons are not provided.
\end{abstract}
\section{Introduction}
Spoken dialogue systems (SDS) allow users to interact with computer applications through conversation. Task-based systems help users achieve goals such as finding restaurants or booking flights. The \emph{dialogue state tracking} (DST) component of an SDS serves to interpret user input and update the \emph{belief state}, which is the system's internal representation of the state of the conversation \cite{young:10c}. This is a probability distribution over dialogue states used by the downstream \emph{dialogue manager} to decide which action the system should perform next \cite{su:2016:nnpolicy,Su:16}; the system action is then verbalised by the {natural language generator} \cite{wen:15a,wen:15b,Dusek:15}.
The Dialogue State Tracking Challenge (DSTC) series of shared tasks has provided a common evaluation framework accompanied by labelled datasets \cite{Williams:16}. In this framework, the dialogue system is supported by a \emph{domain ontology} which describes the range of user intents the system can process. The ontology defines a collection of \emph{slots} and the \emph{values} that each slot can take. The system must track the search constraints expressed by users (\emph{goals} or \emph{informable} slots) and questions the users ask about search results (\emph{requests}), taking into account each user utterance (input via a speech recogniser) and the dialogue context (e.g., what the system just said). The example in Figure \ref{fig:example-dialogue} shows the true state after each user utterance in a three-turn conversation. As can be seen in this example, DST models depend on identifying mentions of ontology items in user utterances. This becomes a non-trivial task when confronted with lexical variation, the dynamics of context and noisy automated speech recognition (ASR) output.
Traditional statistical approaches use separate Spoken Language Understanding (SLU) modules to address lexical variability within a single dialogue turn. However, training such models requires substantial amounts of {domain-specific} annotation. Alternatively, turn-level SLU and cross-turn DST can be coalesced into a single model to achieve superior belief tracking performance, as shown by \newcite{Henderson:14b}. Such coupled models typically rely on {manually constructed} semantic dictionaries to identify alternative mentions of ontology items that vary lexically or morphologically. Figure \ref{fig:sem_dict} gives an example of such a dictionary for three slot-value pairs. This approach, which we term \emph{delexicalisation}, is clearly not scalable to larger, more complex dialogue domains. Importantly, the focus on English in DST research understates the considerable challenges that morphology poses to systems based on exact matching in morphologically richer languages such as Italian or German (see \newcite{Vulic:2017}).
In this paper, we present two new models, collectively called the {Neural Belief Tracker} (NBT) family. The proposed models couple SLU and DST, efficiently learning to handle variation without requiring {any} hand-crafted resources. To do that, NBT models move away from exact matching and instead reason entirely over pre-trained word vectors. The vectors making up the user utterance and preceding system output are first composed into intermediate representations. These representations are then used to decide which of the ontology-defined intents have been expressed by the user up to that point in the conversation.
To the best of our knowledge, NBT models are the first to successfully use pre-trained word vector spaces to improve the language understanding capability of belief tracking models. In evaluation on two datasets, we show that: \textbf{a)} NBT models match the performance of delexicalisation-based models which make use of hand-crafted semantic lexicons; and \textbf{b)} the NBT models significantly outperform those models when such resources are not available. Consequently, we believe this work proposes a framework better-suited to scaling belief tracking models for deployment in real-world dialogue systems operating over sophisticated application domains where the creation of such domain-specific lexicons would be infeasible. % comparable task-specific requirements
\section{Background}
Models for probabilistic dialogue state tracking, or \emph{belief tracking}, were introduced as components of spoken dialogue systems in order to better handle noisy speech recognition and other sources of uncertainty in understanding a user's goals \cite{Bohus:06,Williams:07,young:10c}. Modern dialogue management policies can learn to use a tracker's distribution over intents to decide whether to execute an action or request clarification from the user. As mentioned above, the DSTC shared tasks have spurred research on this problem and established a standard evaluation paradigm \cite{Williams:13a,Henderson:14c,Henderson:14a}. In this setting, the task is defined by an \emph{ontology} that enumerates the goals a user can specify and the attributes of entities that the user can request information about. \iffalse\footnote{Alternative \emph{chat-bot} style systems do not make use of task ontologies or the pipeline model. Instead, these models learn to generate/choose system responses based on previous dialogue turns \cite{vinyals:15,Lowe:15,Serban:16,Serban:16b,Anjuli:17}. This means these models cannot interact with databases or react to user queries different from those encountered in their training data.} \fi Many different belief tracking models have been proposed in the literature, from generative \cite{Thomson:10} and discriminative \cite{Henderson:14b} statistical models to rule-based systems \cite{Wang:13}. To motivate the work presented here, we categorise prior research according to their reliance (or otherwise) on a separate SLU module for interpreting user utterances:\footnote{The best-performing models in DSTC2 all used both raw ASR output and the output of (potentially more than one) SLU decoders \cite{Williams:14,Williams:16}. This does not mean that those models are immune to the drawbacks identified here for the two model categories; in fact, they share the drawbacks of both.}
\paragraph{Separate SLU} Traditional SDS pipelines use Spoken Language Understanding (SLU) decoders to detect slot-value pairs expressed in the Automatic Speech Recognition (ASR) output. The downstream DST model then combines this information with the past dialogue context to update the belief state \cite{Thomson:10,Wang:13,Lee:16,Perez:16,Perez:16b,Sun:16,Jang:16,Shi:2016,Dernoncourt:16a,Liu:2017,Vodolan:2017}. In the DSTC challenges, some systems used the output of template-based matching systems such as Phoenix \cite{Wang:94}. However, more robust and accurate statistical SLU systems are available. Many discriminative approaches to spoken dialogue SLU train independent binary models that decide whether each slot-value pair was expressed in the user utterance. Given enough data, these models can learn which lexical features are good indicators for a given value and can capture elements of paraphrasing \cite{Mairesse:09}. This line of work later shifted focus to robust handling of rich ASR output \cite{Henderson:12,Tur:13}. SLU has also been treated as a sequence labelling problem, where each word in an utterance is labelled according to its role in the user's intent; standard labelling models such as CRFs or Recurrent Neural Networks can then be used \cite[i.a.]{Raymond:07,Yao:14,Celikyilmaz:2015,Mesnil:15,Peng:15,Zhang:16,Liu:16a,Vu:2016,Liu:16b}. Other approaches adopt a more complex modelling structure inspired by semantic parsing \cite{Saleh:14,Vlachos:14}. One drawback shared by these methods is their resource requirements, either because they need to learn independent parameters for each slot and value or because they need fine-grained manual annotation at the word level. This hinders scaling to larger, more realistic application domains.
\paragraph{Joint SLU/DST} Research on belief tracking has found it advantageous to reason about SLU and DST jointly, taking ASR predictions as input and generating belief states as output \cite{Henderson:14b,Sun:14,Zilka:15,Mrksic:15}. In DSTC2, systems which used no external SLU module outperformed all systems that only used external SLU features. Joint models typically rely on a strategy known as \emph{delexicalisation} whereby slots and values mentioned in the text are replaced with generic labels. Once the dataset is transformed in this manner, one can extract a collection of template-like $n$-gram features such as \textbf{[want \emph{tagged-value} food]}. To perform belief tracking, the shared model iterates over all slot-value pairs, extracting delexicalised feature vectors and making a separate binary decision regarding each pair. Delexicalisation introduces a hidden dependency that is rarely discussed: how do we identify slot/value mentions in text? For toy domains, one can manually construct \emph{semantic dictionaries} which list the potential rephrasings for all slot values. As shown by Mrk{\v{s}}i\'c et al.~\shortcite{Mrksic:16}, the use of such dictionaries is essential for the performance of current delexicalisation-based models. Again though, this will not scale to the rich variety of user language or to general domains.
The primary motivation for the work presented in this paper is to overcome the limitations that affect previous belief tracking models. The NBT model efficiently learns from the available data by: \textbf{1)} leveraging semantic information from pre-trained word vectors to resolve lexical/morphological ambiguity; \textbf{2)} maximising the number of parameters shared across ontology values; and \textbf{3)} having the flexibility to learn domain-specific paraphrasings and other kinds of variation that make it infeasible to rely on exact matching and delexicalisation as a robust strategy.
\section{Neural Belief Tracker}
The Neural Belief Tracker (NBT) is a model designed to detect the slot-value pairs that make up the user's goal at a given turn during the flow of dialogue. Its input consists of the system dialogue acts preceding the user input, the user utterance itself, and a single candidate slot-value pair that it needs to make a decision about. For instance, the model might have to decide whether the goal \textsc{food=Italian} has been expressed in \emph{`I'm looking for good pizza'}. To perform belief tracking, the NBT model \emph{iterates} over {all} candidate slot-value pairs (defined by the ontology), and decides which ones have just been expressed by the user.
Figure \ref{fig:sys_diagram} presents the flow of information in the model. The first layer in the NBT hierarchy performs representation learning given the three model inputs, producing vector representations for the user utterance ($\mathbf{r}$), the {current} candidate slot-value pair ($\mathbf{c}$) and the system dialogue acts ($\mathbf{t_{q}, t_{s}, t_{v}}$). Subsequently, the learned vector representations interact through the \emph{context modelling} and \emph{semantic decoding} submodules to obtain the intermediate \emph{interaction summary} vectors $\mathbf{d_{r}, d_{c}}$ and $\mathbf{d}$. These are used as input to the final \emph{decision-making} module which decides whether the user expressed the intent represented by the candidate slot-value pair.
\subsection{Representation Learning}
For any given user utterance, system act(s) and candidate slot-value pair, the representation learning submodules produce vector representations which act as input for the downstream components of the model. All representation learning subcomponents make use of pre-trained collections of word vectors. As shown by Mrk{\v{s}}i\'c et al.~\shortcite{Mrksic:16}, specialising word vectors to express \emph{semantic similarity} rather than \emph{relatedness} is essential for improving belief tracking performance. For this reason, we use the semantically-specialised Paragram-SL999 word vectors \cite{Wieting:15} throughout this work. The NBT training procedure keeps these vectors fixed: that way, at test time, unseen words semantically related to familiar slot values (i.e.~\emph{inexpensive} to \emph{cheap}) will be recognised purely by their position in the original vector space (see also Rockt\"aschel et al.~\shortcite{rocktaschel:2016}). This means that the NBT model parameters can be shared across all values of the given slot, or even across all slots.
Let $u$ represent a user utterance consisting of $k_u$ words $u_1, u_2, \ldots, u_{k_u}$. Each word has an associated word vector $\mathbf{u}_1, \ldots, \mathbf{u}_{k_u}$.
We propose two model variants which differ in the method used to produce vector representations of $u$: \textsc{NBT-DNN} and \textsc{NBT-CNN}. Both act over the constituent $n$-grams of the utterance. Let $\mathbf{v}_{i}^{n}$ be the concatenation of the $n$ word vectors starting at index $i$, so that:
\begin{equation}
\mathbf{v}_{i}^{n} = \mathbf{u}_{i} \oplus \ldots \oplus \mathbf{u}_{i+n-1}
\end{equation}
\noindent where $\oplus$ denotes vector concatenation. The simpler of our two models, which we term \textsc{NBT-DNN}, is shown in Figure \ref{fig:nbt_dnn}. This model computes cumulative $n$-gram representation vectors $\mathbf{r}_{1}$, $\mathbf{r}_{2}$ and $\mathbf{r}_{3}$, which are the $n$-gram `summaries' of the unigrams, bigrams and trigrams in the user utterance:% For $n = 1,2,3$, these are expressed as:
\begin{equation}
\mathbf{r}_{n} = \sum_{i=1}^{k_u-n+1}{\mathbf{v}_{i}^{n}} %, ~~
\end{equation}
\noindent Each of these vectors is then non-linearly mapped to intermediate representations of the same size: %For $n$-gram lengths of $1,2,3$, these are given by:
\begin{equation}
\mathbf{r}_{n}' = \sigma (W_{n}^{s}\mathbf{r}_{n} + b_{n}^{s})
\end{equation}
\noindent where the weight matrices and bias terms map the cumulative $n$-grams to vectors of the same dimensionality and $\sigma$ denotes the sigmoid activation function. We maintain a separate set of parameters for each slot (indicated by superscript $s$). The three vectors are then summed to obtain a single representation for the user utterance:
\begin{equation}
\mathbf{r} ~ = ~ \mathbf{r}_{1}' + \mathbf{r}_{2}' +\mathbf{r}_{3}' \label{eqn:r} \\
\end{equation}
The cumulative $n$-gram representations used by this model are just unweighted sums of all word vectors in the utterance. Ideally, the model should learn to recognise which parts of the utterance are more relevant for the subsequent classification task. For instance, it could learn to ignore verbs or stop words and pay more attention to adjectives and nouns which are more likely to express slot values.
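To make the computation above concrete, the following is a minimal NumPy sketch (not the authors' implementation) of the \textsc{NBT-DNN} utterance encoder; the weight shapes and random inputs are illustrative assumptions only.
\begin{verbatim}
# Minimal sketch (not the authors' code) of the NBT-DNN utterance encoder:
# cumulative n-gram sums, a slot-specific non-linear map, and a final sum.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nbt_dnn_utterance(U, W, b):
    """U: (k_u, D) word vectors; W[n]: (D, n*D), b[n]: (D,) for n = 1, 2, 3."""
    k_u, D = U.shape
    r = np.zeros(D)
    for n in (1, 2, 3):
        grams = [U[i:i + n].reshape(-1) for i in range(k_u - n + 1)]
        r_n = np.sum(grams, axis=0)          # cumulative n-gram vector of size n*D
        r += sigmoid(W[n] @ r_n + b[n])      # map to D dimensions, then sum into r
    return r

# Illustrative shapes only (D = 300 as in the paper).
rng = np.random.default_rng(0)
U = rng.standard_normal((5, 300))
W = {n: 0.01 * rng.standard_normal((300, n * 300)) for n in (1, 2, 3)}
b = {n: np.zeros(300) for n in (1, 2, 3)}
print(nbt_dnn_utterance(U, W, b).shape)      # (300,)
\end{verbatim}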
\paragraph{\textsc{NBT-CNN}} Our second model draws inspiration from successful applications of Convolutional Neural Networks (CNNs) for language understanding \cite{Collobert:11,Kalchbrenner:14,Kim:14}. These models typically apply a number of convolutional filters to $n$-grams in the input sentence, followed by non-linear activation functions and max-pooling. Following this approach, the \textsc{NBT-CNN} model applies $L=300$ different filters for $n$-gram lengths of $1,2$ and $3$ (Figure \ref{fig:nbt_cnn}). Let $F_{n}^{s} \in R^{L \times nD}$ denote the collection of filters for each value of $n$, where $D = 300$ is the word vector dimensionality. If $\mathbf{v}_{i}^{n}$ denotes the concatenation of $n$ word vectors starting at index $i$, let $\mathbf{m}_{n} = [\mathbf{v}_{1}^{n}; \mathbf{v}_{2}^{n}; \ldots; \mathbf{v}_{k_u-n+1}^{n}]$ be the list of $n$-grams that convolutional filters of length $n$ run over. The three intermediate representations are then given by:
\begin{equation}
{R}_{n} = F_n^s ~ \mathbf{m}_n
\end{equation}
Each column of the intermediate matrices ${R}_n$ is produced by a single convolutional filter of length $n$. We obtain summary $n$-gram representations by pushing these representations through a rectified linear unit (ReLU) activation function \cite{Nair:2010icml} and max-pooling over time (i.e.~columns of the matrix) to get a single feature for each of the $L$ filters applied to the utterance:
\begin{equation}
\mathbf{r}_{n}' = \mathtt{maxpool} \left( \mathtt{ReLU} \left( {R}_{n} + b_{n}^{s} \right) \right) %\\
\end{equation}
\noindent where $b_{n}^{s}$ is a bias term broadcast across all filters. Finally, the three summary $n$-gram representations are summed to obtain the final utterance representation vector $\mathbf{r}$ (as in Equation \ref{eqn:r}). The \textsc{NBT-CNN} model is (by design) better suited to longer utterances, as its convolutional filters interact directly with subsequences of the utterance, and not just their noisy summaries given by the \textsc{NBT-DNN}'s cumulative $n$-grams.
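As a rough counterpart for the convolutional variant, the sketch below (again an assumption-laden illustration, not the released system) applies $L$ filters per $n$-gram length, a ReLU and max-pooling over time:
\begin{verbatim}
# Sketch of the NBT-CNN utterance encoder: L filters per n-gram length,
# ReLU, max-pooling over time, and summation of the three summaries.
import numpy as np

def nbt_cnn_utterance(U, F, b):
    """U: (k_u, D) word vectors; F[n]: (L, n*D) filters; b[n]: (L,) biases."""
    k_u, _ = U.shape
    r = 0.0
    for n in (1, 2, 3):
        m_n = np.stack([U[i:i + n].reshape(-1)       # all n-gram concatenations
                        for i in range(k_u - n + 1)])
        R_n = m_n @ F[n].T                           # one column per filter
        r_n = np.max(np.maximum(R_n + b[n], 0.0), axis=0)   # ReLU + max over time
        r = r + r_n
    return r                                         # (L,) utterance summary

rng = np.random.default_rng(0)
U = rng.standard_normal((7, 300))
F = {n: 0.01 * rng.standard_normal((300, n * 300)) for n in (1, 2, 3)}
b = {n: np.zeros(300) for n in (1, 2, 3)}
print(nbt_cnn_utterance(U, F, b).shape)              # (300,)
\end{verbatim}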
\subsection{Semantic Decoding}
The NBT diagram in Figure \ref{fig:sys_diagram} shows that the utterance representation $\mathbf{r}$ and the candidate slot-value pair representation $\mathbf{c}$ directly interact through the \emph{semantic decoding} module. This component decides whether the user explicitly expressed an intent matching the current candidate pair (i.e.~without taking the dialogue context into account). Examples of such matches would be \emph{`I want Thai food'} with \texttt{food=Thai} or more demanding ones such as \emph{`a pricey restaurant'} with \texttt{price=expensive}. This is where the use of high-quality pre-trained word vectors comes into play: a delexicalisation-based model could deal with the former example but would be helpless in the latter case, unless a human expert had provided a {semantic dictionary} listing all potential rephrasings for each value in the domain ontology.
Let the vector space representations of a candidate pair's slot name and value be given by $\mathbf{c_{s}}$ and $\mathbf{c_{v}}$ (with vectors of multi-word slot names/values summed together). The NBT model learns to map this tuple into a single vector $\mathbf{c}$ of the same dimensionality as the utterance representation $\mathbf{r}$. These two representations are then forced to interact in order to learn a similarity metric which discriminates between interactions of utterances with slot-value pairs that they either do or do not express:
\begin{align}
\mathbf{c} ~ &= \sigma \big( W_{c}^{s} (\mathbf{c_{s}} + \mathbf{c_{v}}) + b_{c}^{s} \big) \\
\mathbf{d} &= \mathbf{r} \otimes \mathbf{c}
\end{align}
\noindent where $\otimes$ denotes \emph{element-wise} vector multiplication. The dot product, which may seem like the more intuitive similarity metric, would reduce the rich set of features in $\mathbf{d}$ to a single scalar. The element-wise multiplication allows the downstream network to make better use of its parameters by learning non-linear interactions between sets of features in $\mathbf{r}$ and $\mathbf{c}$.\footnote{We also tried to concatenate $\mathbf{r}$ and $\mathbf{c}$ and pass that vector to the downstream decision-making neural network. However, this set-up led to very weak performance since our relatively small datasets did not suffice for the network to learn to model the interaction between the two feature vectors.}
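A short sketch of this interaction, with shapes assumed rather than taken from the released code:
\begin{verbatim}
# Sketch of semantic decoding: embed the candidate slot-value pair and let it
# interact with the utterance vector via element-wise multiplication.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def semantic_decode(r, c_s, c_v, W_c, b_c):
    c = sigmoid(W_c @ (c_s + c_v) + b_c)    # candidate pair representation
    return r * c                            # element-wise, NOT a dot product

rng = np.random.default_rng(1)
D = 300
d = semantic_decode(rng.standard_normal(D), rng.standard_normal(D),
                    rng.standard_normal(D), 0.01 * rng.standard_normal((D, D)),
                    np.zeros(D))
print(d.shape)                              # (300,)
\end{verbatim}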
\subsection{Context Modelling}
This `decoder' does not yet suffice to extract intents from utterances in human-machine dialogue. To understand some queries, the belief tracker must be aware of \emph{context}, i.e.~the flow of dialogue leading up to the latest user utterance. While all previous system and user utterances are important, the most relevant one is the last system utterance, in which the dialogue system could have performed (among others) one of the following two \emph{system acts}:
\begin{enumerate}
\item \textbf{System Request}: The system asks the user about the value of a specific slot $T_{q}$. If the system utterance is: \emph{`what price range would you like?'} and the user answers with \emph{any}, the model must infer the reference to \emph{price range}, and not to other slots such as \emph{area} or \emph{food}.
\item \textbf{System Confirm:} The system asks the user to confirm whether a specific slot-value pair $(T_{s}, T_{v})$ is part of their desired constraints. For example, if the user responds to \emph{`how about Turkish food?'} with \emph{`yes'}, the model must be aware of the system act in order to correctly update the belief state.
\end{enumerate}
If we make the Markovian decision to only consider the last set of system acts, we can incorporate context modelling into the NBT. Let $\mathbf{t_{q}}$ and $(\mathbf{t_{s}}, \mathbf{t_{v}})$ be the word vectors of the arguments for the system request and confirm acts (zero vectors if none). The model computes the following measures of similarity between the system acts, candidate pair $(\mathbf{c_{s}}, \mathbf{c_{v}})$ and utterance representation $\mathbf{r}$:
\begin{align}
\mathbf{m_{r}} &= ~ (\mathbf{c_{s}} \cdot \mathbf{t_{q}}) \mathbf{r} \\
\mathbf{m_{c}} &= ~ ( \mathbf{c_{s}} \cdot \mathbf{t_{s}} ) ( \mathbf{c_{v}} \cdot \mathbf{t_{v}} ) \mathbf{r}
\end{align}
\noindent where $\cdot$ denotes dot product. The computed similarity terms act as gating mechanisms which only pass the utterance representation through if the system asked about the current candidate slot or slot-value pair. This type of interaction is particularly useful for the confirm system act: if the system asks the user to confirm, the user is likely not to mention any slot values, but to just respond affirmatively or negatively. This means that the model must consider the \emph{three-way interaction} between the utterance, candidate slot-value pair and the slot value pair offered by the system. If (and only if) the latter two are the same should the model consider the affirmative or negative polarity of the user utterance when making the subsequent binary decision.
\paragraph{Binary Decision Maker} The intermediate representations are passed through another hidden layer and then combined. If $\phi_{dim}(\mathbf{x}) = \sigma (W \mathbf{x} + b)$ is a layer which maps input vector $\mathbf{x}$ to a vector of size $dim$, the input to the final binary softmax (which represents the decision) is given by:
\begin{align*}
\mathbf{y} &= \phi_{2} \big( \phi_{100}(\mathbf{d}) + \phi_{100}({\mathbf{m_r}}) + \phi_{100}({\mathbf{m_c}}) \big)
\end{align*}
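Putting the gating and the decision layer together, a hedged sketch follows; the parameter names are ours, not those of the original implementation.
\begin{verbatim}
# Sketch of context gating and the final decision layer. The dot-product gates
# pass r through only when the system act matches the candidate slot (request)
# or slot-value pair (confirm); all inputs are NumPy vectors.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nbt_decision(r, d, c_s, c_v, t_q, t_s, t_v, params):
    """params[name] = (W, b); 'd', 'r', 'c' map to 100 dims, 'out' maps to 2."""
    m_r = (c_s @ t_q) * r                    # system-request gate
    m_c = (c_s @ t_s) * (c_v @ t_v) * r      # system-confirm gate
    phi = lambda x, W, b: sigmoid(W @ x + b)
    h = phi(d, *params['d']) + phi(m_r, *params['r']) + phi(m_c, *params['c'])
    return phi(h, *params['out'])            # 2-dim input to the binary softmax
\end{verbatim}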
\section{Belief State Update Mechanism}
In spoken dialogue systems, belief tracking models operate over the output of automatic speech recognition (ASR). Despite improvements to speech recognition, the need to make the most out of imperfect ASR will persist as dialogue systems are used in increasingly noisy environments.
In this work, we define a simple rule-based belief state update mechanism which can be applied to ASR $N$-best lists. For dialogue turn $t$, let $sys^{t-1}$ denote the preceding system output, and let $h^{t}$ denote the list of $N$ ASR hypotheses $h_{i}^{t}$ with posterior probabilities $p_{i}^{t}$. For any hypothesis $h^{t}_{i}$, slot $s$ and slot value $v \in V_{s}$, NBT models estimate $\mathbb{P}(s, v \mid h^{t}_{i}, sys^{t-1})$, which is the (turn-level) probability that $(s, v)$ was expressed in the given hypothesis. The predictions for $N$ such hypotheses are then combined as: \\
\begin{equation*}
\mathbb{P}(s, v \mid h^{t}, sys^{t-1}) = \sum_{i=1}^{N} p_{i}^{t} ~ \mathbb{P}\left( s,v \mid h_{i}^{t}, sys^{t-1}\right)
\end{equation*}
This turn-level belief state estimate is then combined with the (cumulative) belief state up to time $(t-1)$ to get the updated belief state estimate:
\begin{eqnarray*}
\mathbb{P}(s, v \mid h^{1:t}, sys^{1:t-1}) ~=~ \lambda ~ \mathbb{P}\left(s, v \mid h^{t}, sys^{t-1}\right) \\
+ ~ (1 - \lambda) ~ \mathbb{P}\left(s, v \mid h^{1:t-1}, sys^{1:t-2}\right)
\end{eqnarray*}
\noindent where $\lambda$ is the coefficient which determines the relative weight of the turn-level and previous turns' belief state estimates.\footnote{This coefficient was tuned on the DSTC2 development set. The best performance was achieved with $\lambda = 0.55$.} For slot $s$, the set of its \emph{detected values} at turn $t$ is then given by:
\begin{equation*}
V_{s}^{t} = \lbrace {v \in V_{s}} ~ \mid ~ {\mathbb{P}\left( s,v \mid h^{1:t}, sys^{1:t-1} \right) \geq 0.5} \rbrace
\end{equation*}
For informable (i.e.~goal-tracking) slots, the value in $V_{s}^{t}$ with the highest probability is chosen as the current goal (if $V_{s}^{t} \neq \emptyset$). For requests, all slots in $V_{req}^{t}$ are deemed to have been requested. As requestable slots serve to model single-turn user queries, they require no belief tracking across turns.
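The update rule above is simple enough to state directly in code; the sketch below assumes an \texttt{nbt\_score} function that returns the turn-level probability $\mathbb{P}(s, v \mid h^{t}_{i}, sys^{t-1})$.
\begin{verbatim}
# Sketch of the rule-based belief update over an ASR N-best list: interpolate
# the turn-level estimate with the previous belief state (lambda = 0.55 in the
# paper) and threshold at 0.5 to obtain the detected values.
def update_belief(prev_belief, nbest, nbt_score, slot, values, lam=0.55):
    """prev_belief: {value: prob}; nbest: [(hypothesis, asr_prob), ...];
    nbt_score(hyp, slot, value): turn-level probability from the NBT."""
    new_belief = {}
    for v in values:
        turn = sum(p * nbt_score(h, slot, v) for h, p in nbest)
        new_belief[v] = lam * turn + (1.0 - lam) * prev_belief.get(v, 0.0)
    detected = {v for v, prob in new_belief.items() if prob >= 0.5}
    return new_belief, detected
\end{verbatim}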
\section{Experiments}
\subsection{Datasets}
Two datasets were used for training and evaluation. Both consist of user conversations with task-oriented dialogue systems designed to help users find suitable restaurants around Cambridge, UK. The two corpora share the same domain ontology, which contains three \emph{informable} (i.e.~goal-tracking) slots: \textsc{food}, \textsc{area} and \textsc{price}. The users can specify {values} for these slots in order to find restaurants which best meet their criteria. Once the system suggests a restaurant, the users can ask about the values of up to eight \emph{requestable} slots (\textsc{phone number, address}, etc.). The two datasets are:
\begin{enumerate}
\item \textbf{DSTC2}: We use the transcriptions, ASR hypotheses and turn-level semantic labels provided for the Dialogue State Tracking Challenge 2 \cite{Henderson:14a}. The official transcriptions contain various spelling errors which we corrected manually; the cleaned version of the dataset is available at \url{mi.eng.cam.ac.uk/~nm480/dstc2-clean.zip}. The training data contains 2207 dialogues \iffalse (15,611 dialogue turns) \fi and the test set consists of 1117 dialogues. We train NBT models on transcriptions but report belief tracking performance on test set ASR hypotheses provided in the original challenge.
\item \textbf{WOZ 2.0}: Wen et al.~\shortcite{Wen:16} performed a Wizard of Oz style experiment in which Amazon Mechanical Turk users assumed the role of the system or the user of a task-oriented dialogue system based on the DSTC2 ontology. Users typed instead of using speech, which means performance in the WOZ experiments is more indicative of the model's capacity for semantic understanding than its robustness to ASR errors. Whereas in the DSTC2 dialogues users would quickly adapt to the system's (lack of) language understanding capability, the WOZ experimental design gave them freedom to use more sophisticated language. We expanded the original WOZ dataset from Wen et al.~\shortcite{Wen:16} using the same data collection procedure, yielding a total of 1200 dialogues. \iffalse (5,012 turns). \fi We divided these into 600 training, 200 validation and 400 test set dialogues. The WOZ 2.0 dataset is available at \url{mi.eng.cam.ac.uk/~nm480/woz_2.0.zip}.
\end{enumerate}
\paragraph{Training Examples} The two corpora are used to create training data for two separate experiments. For each dataset, we iterate over all train set utterances, generating one example for \emph{each} of the slot-value pairs in the ontology. An example consists of a transcription, its context (i.e.~list of preceding system acts) and a candidate slot-value pair. The binary label for each example indicates whether or not its utterance and context express the example's candidate pair. For instance, `\emph{I would like Irish food}' would generate a positive example for candidate pair \textsc{food={Irish}}, and a negative example for every other slot-value pair in the ontology.
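For illustration, the example-generation step could be written as the following sketch (the ontology and labels are toy values, not the actual corpora):
\begin{verbatim}
# Sketch of training-example generation: one (utterance, context, candidate
# pair, label) tuple per ontology slot-value pair for every utterance.
def make_examples(utterance, context, true_pairs, ontology):
    """ontology: {slot: [values]}; true_pairs: set of (slot, value) expressed."""
    examples = []
    for slot, values in ontology.items():
        for value in values:
            label = int((slot, value) in true_pairs)
            examples.append((utterance, context, (slot, value), label))
    return examples

ontology = {"food": ["Irish", "Thai"], "area": ["north", "south"]}
exs = make_examples("I would like Irish food", [], {("food", "Irish")}, ontology)
print(sum(lab for *_, lab in exs), "positive out of", len(exs))  # 1 out of 4
\end{verbatim}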
\paragraph{Evaluation} We focus on two key evaluation metrics introduced in \cite{Henderson:14a}: \vspace{-0mm}
\begin{enumerate}
\item \textbf{Goals} (`joint goal accuracy'): the proportion of dialogue turns where all the user's search goal constraints were correctly identified; \vspace{-0mm}
\item \textbf{Requests}: similarly, the proportion of dialogue turns where user's requests for information were identified correctly. \vspace{-0mm}
\end{enumerate}
\subsection{Models}
We evaluate two NBT model variants: \textsc{NBT-DNN} and \textsc{NBT-CNN}. To train the models, we use the Adam optimizer \cite{Adam:15} with cross-entropy loss, backpropagating through all the NBT subcomponents while keeping the pre-trained word vectors fixed (in order to allow the model to deal with unseen words at test time). The model is trained separately for each slot. Due to the high class bias (most of the constructed examples are negative), we incorporate a fixed number of positive examples in each mini-batch.\footnote{Model hyperparameters were tuned on the respective validation sets. For both datasets, the initial Adam learning rate was set to $0.001$, and $\frac{1}{8}$th of positive examples were included in each mini-batch. The batch size did not affect performance: it was set to 256 in all experiments. Gradient clipping (to $\left[-2.0, 2.0\right]$) was used to handle exploding gradients. Dropout \cite{Srivastava:2014} was used for regularisation (with 50\% dropout rate on all intermediate representations). Both \textsc{NBT} models were implemented in TensorFlow \cite{tf:15}. }
\paragraph{Baseline Models} For each of the two datasets, we compare the NBT models to:
\begin{enumerate}
\item A baseline system that implements a well-known competitive delexicalisation-based model for that dataset. For DSTC2, the model is that of Henderson et al.~\shortcite{Henderson:14d,Henderson:14b}. This model is an $n$-gram based neural network model with recurrent connections between turns (but not inside utterances) which replaces occurrences of slot names and values with generic delexicalised features. For WOZ 2.0, we compare the NBT models to a more sophisticated belief tracking model presented in \cite{Wen:16}. This model uses an RNN for belief state updates and a CNN for turn-level feature extraction. Unlike \textsc{NBT-CNN}, their CNN operates not over vectors, but over delexicalised features akin to those used by \newcite{Henderson:14d}.
\item The same baseline model supplemented with a task-specific semantic dictionary (produced by the baseline system creators). The two dictionaries are available at \url{mi.eng.cam.ac.uk/~nm480/sem-dict.zip}. The DSTC2 dictionary contains only three rephrasings. Nonetheless, the use of these rephrasings translates to substantial gains in DST performance (see Sect.~6.1). We believe this result supports our claim that the vocabulary used by Mechanical Turkers in DSTC2 was constrained by the system's inability to cope with lexical variation and ASR noise. The WOZ dictionary includes 38 rephrasings, showing that the unconstrained language used by Mechanical Turkers in the Wizard-of-Oz setup requires more elaborate lexicons.
\end{enumerate}
Both baseline models map exact matches of ontology-defined intents (and their lexicon-specified rephrasings) to one-hot delexicalised $n$-gram features. This means that pre-trained vectors cannot be incorporated directly into these models.
\section{Results}
\subsection{Belief Tracking Performance}
Table \ref{tab:dstc2_performance} shows the performance of NBT models trained and evaluated on DSTC2 and WOZ 2.0 datasets. The NBT models outperformed the baseline models in terms of both {joint goal} and request accuracies. For goals, the gains are \emph{always} statistically significant (paired $t$-test, $p<0.05$). Moreover, there was no statistically significant variation between the NBT and the lexicon-supplemented models, showing that the NBT can handle semantic relations which otherwise had to be explicitly encoded in semantic dictionaries.
While the NBT performs well across the board, we can compare its performance on the two datasets to understand its strengths. The improvement over the baseline is greater on WOZ 2.0, which corroborates our intuition that the NBT's ability to learn linguistic variation is vital for this dataset containing longer sentences, richer vocabulary and no ASR errors. By comparison, the language of the subjects in the DSTC2 dataset is less rich, and compensating for ASR errors is the main hurdle: given access to the DSTC2 test set transcriptions, the NBT models' goal accuracy rises to 0.96. This indicates that future work should focus on better ASR compensation if the model is to be deployed in environments with challenging acoustics.
\subsection{The Importance of Word Vector Spaces}
The NBT models use the semantic relations embedded in the pre-trained word vectors to handle semantic variation and produce high-quality intermediate representations. Table \ref{tab:wv_comparison} shows the performance of \textsc{NBT-CNN}\footnote{The \textsc{NBT-DNN} model showed the same trends. For brevity, Table \ref{tab:wv_comparison} presents only the \textsc{NBT-CNN} figures. } models making use of three different word vector collections: \textbf{1)} `random' word vectors initialised using the \textsc{xavier} initialisation \cite{Glorot:2010aistats}; \textbf{2)} distributional GloVe vectors \cite{Pennington:14}, trained using co-occurrence information in large textual corpora; and \textbf{3)} \emph{semantically specialised} Paragram-SL999 vectors \cite{Wieting:15}, which are obtained by injecting \emph{semantic similarity constraints} from the Paraphrase Database \cite{ppdb:13} into the distributional GloVe vectors in order to improve their semantic content.
The results in Table \ref{tab:wv_comparison} show that the use of semantically specialised word vectors leads to considerable performance gains: Paragram-SL999 vectors (significantly) outperformed GloVe and \textsc{xavier} vectors for goal tracking on both datasets. The gains are particularly robust for noisy DSTC2 data, where both collections of pre-trained vectors consistently outperformed random initialisation. The gains are weaker for the noise-free WOZ 2.0 dataset, which seems to be large (and clean) enough for the NBT model to learn task-specific rephrasings and compensate for the lack of semantic content in the word vectors. For this dataset, GloVe vectors do not improve over the randomly initialised ones. We believe this happens because distributional models keep related, yet antonymous words close together (e.g.~\emph{north} and \emph{south}, \emph{expensive} and \emph{inexpensive}), offsetting the useful semantic content embedded in these vector spaces.
\section{Conclusion}
In this paper, we have proposed a novel neural belief tracking (NBT) framework designed to overcome current obstacles to deploying dialogue systems in real-world dialogue domains. The NBT models offer the known advantages of coupling Spoken Language Understanding and Dialogue State Tracking, without relying on hand-crafted semantic lexicons to achieve state-of-the-art performance. Our evaluation demonstrated these benefits: the NBT models match the performance of models which make use of such lexicons and vastly outperform them when these are not available. Finally, we have shown that the performance of NBT models improves with the semantic quality of the underlying word vectors. To the best of our knowledge, we are the first to move past intrinsic evaluation and show that \emph{semantic specialisation} boosts performance in downstream tasks.
In future work, we intend to explore applications of the NBT for multi-domain dialogue systems, as well as in languages other than English that require handling of complex morphological variation.
\section*{Acknowledgements}
The authors would like to thank Ivan Vuli\'{c}, Ulrich Paquet, the Cambridge Dialogue Systems Group and the anonymous ACL reviewers for their constructive feedback and helpful discussions.
\clearpage
\bibliographystyle{acl2017}
\clearpage
\end{document}
|
What do Neural Machine Translation Models Learn about Morphology? | 1704.03471 | Table 6: Impact of changing the target language on POS tagging accuracy. Self = German/Czech in rows 1/2 respectively. | [
"SourceTarget",
"English",
"Arabic",
"Self"
] | [
[
"German",
"93.5",
"92.7",
"89.3"
],
[
"Czech",
"75.7",
"75.2",
"71.8"
]
] | We report here results that were omitted from the paper due to the space limit. As noted in the paper, all the results consistently show that i) layer 1 performs better than layers 0 and 2; and ii) char-based representations are better than word-based for learning morphology. | \section{Motivation} \label{sec:motivation}
Translating morphologically-rich languages is especially difficult due to a large vocabulary size and a high level of sparsity. Different solutions have been proposed to deal with this problem, for example factored models in phrase-based MT~\cite{koehn-hoang:2007:EMNLP-CoNLL2007} or softmax approximations in neural MT~\cite{ruder-softmax}. More recently, neural MT systems have shown significant gains by exploiting characters and other sub-word units~\cite{costajussa-fonollosa:2016:P16-2,sennrich-haddow-birch:2016:P16-12,wu2016google}. Presumably, such models are better than word-based models in representing the structure of rare and unseen words. Indeed, \newcite{sennrich-haddow-birch:2016:P16-12} have found that the unigram translation accuracy of words decreases for lower-frequency words. They also observed somewhat different behavior when translating into different languages. It is less clear, however, what and how neural translation models learn about word structure. In this work we are interested in answering questions such as the following:
\begin{itemize}
\item How do character-based models improve neural MT?
\item What components of the NMT system encode word structure?
\item How does the target language affect the learning of word structure?
\item What is the role of the decoder in learning word representations?
\end{itemize}
In the next section, we describe our data-driven approach for addressing such questions. We aim to obtain quantitative answers that will lead to generalizable conclusions.
\section{Methodology}
Given a source sentence $s = \{w_1, w_2, ..., w_N\}$ and a target sentence $t=\{u_1, u_2, ..., u_M\}$, we first
generate a vector representation for the source sentence
using
an encoder (Eqn.\ \ref{eq:enc})
and then
map this vector to the target sentence
using
a decoder (Eqn.\ \ref{eq:dec})
\cite{sutskever2014sequence}:
\begin{align}
&\texttt{ENC}: s=\{w_1, w_2, ..., w_N\} \mapsto \sss \in \reals^k \label{eq:enc} \\
&\texttt{DEC} : \sss \in \reals^k \mapsto t=\{u_1, u_2, ..., u_M\} \label{eq:dec}
\end{align}
In this work,
we use long short-term memory (LSTM) \cite{hochreiter1997long} encoder-decoders with attention \cite{bahdanau2014neural},
which we train on parallel data.
After training the NMT system, we freeze the parameters of the encoder and use \texttt{ENC} as a feature extractor to generate vectors representing
words in the sentence. Let $\texttt{ENC}_i(s)$ denote the encoded representation of word $w_i$. For example, this may be the output of the LSTM after word $w_i$. We
feed $\texttt{ENC}_i(s)$ to a neural
classifier that is trained to predict POS or morphological tags and evaluate the quality of the representation based on our ability to train a good classifier.
By comparing the performance of classifiers trained with features from different instantiations of \texttt{ENC}, we can evaluate what
MT encoders learn about word structure. Figure \ref{fig:approach} illustrates this process.
We follow a similar procedure
for analyzing representation learning in $\texttt{DEC}$.
The
classifier itself can be modeled in different ways. For example, it may be an LSTM over outputs of the encoder. However, as we are interested in assessing the quality of the representations learned by the MT system,
we choose to model the classifier as a simple feed-forward
network with one hidden layer and a ReLU non-linearity. Arguably, if the learned representations are good, then a non-linear classifier should be able to extract useful information from them.\footnote{We also experimented with a linear classifier and observed similar trends to the non-linear case, but overall lower results; \newcite{qian-qiu-huang:2016:P16-11} reported similar findings.}
We emphasize that our goal is not to beat the state-of-the-art on a given task, but rather to analyze what NMT models %
learn about morphology.
The classifier is trained with a cross-entropy loss;
more details on
its architecture are
in the supplementary material.
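A compressed sketch of this probing recipe is given below; it assumes an already-trained \texttt{encoder} callable that returns one hidden vector per source word, and uses PyTorch-style modules purely for illustration.
\begin{verbatim}
# Sketch of the probing procedure: freeze the trained encoder, use it as a
# feature extractor, and train a one-hidden-layer ReLU classifier on POS tags.
import torch
import torch.nn as nn

class TagClassifier(nn.Module):
    def __init__(self, dim, n_tags):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                 nn.Dropout(0.5), nn.Linear(dim, n_tags))

    def forward(self, x):
        return self.net(x)

def train_probe(encoder, sentences, tags, n_tags, dim=500, epochs=5):
    clf = TagClassifier(dim, n_tags)
    opt = torch.optim.Adam(clf.parameters())
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for sent, sent_tags in zip(sentences, tags):
            with torch.no_grad():                  # encoder stays frozen
                feats = encoder(sent)              # (len(sent), dim): ENC_i(s)
            loss = loss_fn(clf(feats), torch.tensor(sent_tags))
            opt.zero_grad()
            loss.backward()
            opt.step()
    return clf
\end{verbatim}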
\documentclass[11pt,a4paper]{article}
\usepackage[hyperref]{acl2017}
\usepackage[normalem]{ulem}
% http://ctan.org/pkg/pifont
\newcommand{\cmark}{\ding{51}}%
\newcommand{\xmark}{\ding{55}}%
\newcommand\alert[1]{{\textcolor{red}{#1}}}
\aclfinalcopy % Uncomment this line for the final submission
\def\aclpaperid{496} % Enter the acl Paper ID here
\newcommand\BibTeX{B{\sc ib}\TeX}
\newcommand{\reals}{\mathbb{R}}
\newcommand{\xx}{\mathbf{x}}
\newcommand{\ii}{\mathbf{i}}
\newcommand{\ff}{\mathbf{f}}
\newcommand{\oo}{\mathbf{o}}
\newcommand{\cc}{\mathbf{c}}
\newcommand{\bb}{\mathbf{b}}
\newcommand{\hh}{\mathbf{h}}
\newcommand{\uu}{\mathbf{u}}
\newcommand{\ww}{\mathbf{w}} % word representation
\newcommand{\sss}{\mathbf{s}} % sentence representation
\newcommand{\WW}{\mathbf{W}}
\newcommand{\mm}{\mathbf{m}} % memory
\newcommand{\aaa}{\mathbf{a}} % attention
\newcommand{\rr}{\mathbf{r}} % attention
\newcommand{\zz}{\mathbf{z}} % noise
\title{What do Neural Machine Translation Models Learn about Morphology?}
\author{Yonatan Belinkov$^1$ ~~ Nadir Durrani$^2$ ~~ Fahim Dalvi$^2$ ~~ Hassan Sajjad$^2$ ~~ James Glass$^1$ \\\\
$^1$MIT Computer Science and Artificial Intelligence Laboratory, Cambridge, MA 02139, USA \\
{\tt \{belinkov, glass\}@mit.edu} \\
$^2$Qatar Computing Research Institute, HBKU, Doha, Qatar \\
{\tt \{ndurrani, faimaduddin, hsajjad\}@qf.org.qa}
}
\date{}
\begin{document}
\maketitle
\begin{framed}
\noindent This is a modified version of a paper originally published at ACL 2017 with updated results and discussion in section 5.
\end{framed}
\begin{abstract}
Neural machine translation (MT) models obtain state-of-the-art performance while maintaining a simple, end-to-end architecture. However, little is known about what these models learn about source and target languages during the training process.
In this work, we analyze the representations learned by neural MT models at various levels of granularity and empirically evaluate the quality of the
representations
for learning morphology
through extrinsic part-of-speech and morphological tagging tasks. We conduct a thorough investigation along several parameters: word-based vs.\ character-based representations, depth of the encoding layer, the identity
of the target language, and encoder vs.\ decoder representations. Our data-driven, quantitative evaluation sheds light on important aspects in the neural MT system and its ability to capture word structure.\footnote{Our code is available at \url{https://github.com/boknilev/nmt-repr-analysis}.}
\end{abstract}
\input{introduction}
\input{methodology}
\input{data}
\input{encoder-analysis}
\input{decoder-analysis}
\input{related-work}
\input{conclusion}
\section*{Acknowledgments}
We would like to thank Helmut Schmid for providing the Tiger corpus, members of the MIT Spoken Language Systems group for helpful comments, and the three anonymous reviewers for their useful suggestions.
This research was carried out in collaboration between the HBKU Qatar Computing Research Institute (QCRI) and the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).
\bibliographystyle{acl_natbib}
\newpage
\appendix
\input{supplement}
\end{document}
\section{Conclusion}
Neural networks have become ubiquitous in machine translation due to their elegant architecture and good performance. The representations they use for linguistic units are crucial for obtaining high-quality translation. In this work,
we investigated how neural MT models learn word structure. We evaluated their representation quality on POS and morphological tagging in a number of languages. Our results lead to the following conclusions:
\begin{itemize}%[itemsep=1pt,topsep=5pt]%[leftmargin=*]
\item Character-based representations are better than word-based ones for learning morphology, especially in rare and unseen words.
\item Lower layers of the neural network are better at capturing morphology, while deeper networks improve translation performance. We hypothesize that lower layers are more focused on word structure, while higher ones are focused on word meaning.
\item Translating into morphologically-poorer languages leads to better source-side
representations. This is partly, but not completely, %
correlated with BLEU scores. %
\item There is little difference between encoder and decoder representation quality. The attention mechanism does not seem to significantly affect the quality of the decoder representations, while it is important for the encoder representations.
\end{itemize}
These insights can guide further development of neural MT systems. For instance, jointly learning translation and morphology can possibly lead to better representations and improved translation. Our analysis indicates that this kind of approach should take into account factors such as the encoding layer and the type of word representation.
Another area for future work is to extend the analysis to other word %
representations (e.g.\ byte-pair encoding),
deeper networks,
and more
semantically-oriented tasks such as
semantic role-labeling or
semantic parsing.
\section{Supplementary Material}
\label{sec:supplemental}
\subsection{Training Details} \label{sec:sup-training}
\paragraph{POS/Morphological classifier} The classifier used for all prediction tasks is a feed-forward network with one hidden layer, dropout ($\rho=0.5$), a ReLU non-linearity, and an output layer mapping to the tag set (followed by a Softmax). The size of the hidden layer is set to be identical to the size of the encoder's hidden state (typically 500 dimensions). We use Adam \cite{kingma2014adam} with default parameters to minimize the cross-entropy objective.
Training is run with
mini-batches of size 16 and stopped once the loss on the dev set stops improving; we allow a patience of 5 epochs.
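The stopping criterion can be sketched as follows (the \texttt{train\_one\_epoch} and \texttt{dev\_loss} helpers are assumed, not part of the released code):
\begin{verbatim}
# Sketch of early stopping with a patience of 5 epochs on the dev-set loss.
def train_with_patience(clf, train_one_epoch, dev_loss, patience=5):
    best, waited = float("inf"), 0
    while waited < patience:
        train_one_epoch(clf)          # mini-batches of 16, Adam with defaults
        current = dev_loss(clf)
        if current < best:
            best, waited = current, 0
        else:
            waited += 1
    return clf
\end{verbatim}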
\paragraph{Neural MT system}
We train a 2-layer LSTM encoder-decoder with attention.
We use the \texttt{seq2seq-attn} implementation \cite{kim2016} with the following default settings: word vectors and LSTM states have 500 dimensions, SGD with initial learning rate of 1.0 and rate decay of 0.5, and dropout rate of 0.3. The character-based model is a CNN with a highway network over characters \cite{kim2015character} with 1000 feature maps and a
kernel width of 6 characters. This model was found to be
useful for translating morphologically-rich languages
\cite{costajussa-fonollosa:2016:P16-2}. The MT system is trained for 20 epochs, and the model with the best dev loss is
used for extracting features for the classifier.
\subsection{Data and Taggers} \label{sec:sup-data}
\paragraph{Datasets}
All of the translation models are trained on the Ted talks corpus included in WIT$^3$ \cite{cettoloEtAl:EAMT2012,cettolol:SeMaT:2016}. Statistics about each language pair are available on the WIT$^3$ website: \url{https://wit3.fbk.eu}.
For experiments using gold tags, we used the Arabic Treebank for Arabic (with the versions and splits described in the MADAMIRA manual \cite{PASHA14.593.L14-1479}) and the Tiger corpus for German.\footnote{\url{http://www.ims.uni-stuttgart.de/forschung/ressourcen/korpora/tiger.html}}
\paragraph{POS and morphological taggers}
We used the following tools to annotate the MT corpora: MADAMIRA \cite{PASHA14.593.L14-1479} for Arabic POS and morphological tags, Tree-Tagger \cite{schmid:2004:PAPERS} for Czech and French POS tags, LoPar \cite{schmid:00a} for German POS and morphological tags, and MXPOST \cite{ratnaparkhi98maximum} for English POS tags.
These tools are recommended
on the Moses website.\footnote{\url{http://www.statmt.org/moses/?n=Moses.ExternalTools}}
As mentioned before, our goal is not to achieve state-of-the-art
results, but rather to study what different components of the NMT architecture learn about word morphology.
Please refer to \newcite{mueller-schmid-schutze:2013:EMNLP} for representative POS and morphological tagging accuracies.
\subsection{Supplementary Results}
\label{sec:sup-results}
We report here results that were omitted from the paper due to the space limit.
Table \ref{tab:different_layers} shows encoder results using
different layers,
languages,
and
representations (word/char-based).
As noted in the paper,
all
the results consistently show that i) layer 1 performs better than
layers 0 and 2;
and
ii) char-based representations are better than word-based for learning morphology.
Table \ref{tab:different_language}
shows that translating into a morphologically-poor language (English) leads to better source
representations, and Table \ref{tab:decoder} provides
additional decoder results.
Table~\ref{tab:decoder-old} shows POS tagging accuracy using decoder representations, where the current word representation was used to predict the next word's tag. The idea is to evaluate whether the current word representation contains POS information about the output of the decoder. Clearly, the current word representation cannot be used to predict the next word's tag. This also holds when removing the attention (En-Ar, 85.54\%) or using character-based representations (En-Ar, 44.5\%).
Since the decoder representation is in the pre-Softmax layer, this means that most of the work of predicting the next word is done in the Softmax layer, while the pre-Softmax representation contains much information about the current input word.
\newpage
\section{Introduction}
Neural network models are quickly becoming the predominant approach to machine translation (MT). Training neural MT (NMT) models can be done in an end-to-end fashion, which is simpler and more elegant than traditional MT systems. Moreover, NMT systems have become competitive with, or better than, the previous state-of-the-art, especially since the introduction of sequence-to-sequence models and the attention mechanism \cite{bahdanau2014neural,sutskever2014sequence}.
The improved translation quality is often attributed to better handling of non-local dependencies and morphology generation \cite{luong-manning:iwslt15,bentivogli-EtAl:2016:EMNLP2016,toral-sanchezcartagena:2017:EACLlong}.
\bigskip
However,
little is known about what and how much these models
learn about each language and its features.
Recent work has started exploring the role of the NMT encoder in learning source syntax \cite{shi-padhi-knight:2016:EMNLP2016},
but
research studies are yet to answer important questions such as: \textit{(i)} what do NMT models learn about word morphology? \textit{(ii)} what is the effect on learning when translating into/from morphologically-rich languages? \mbox{\textit{(iii)} what} impact do different representations (character vs.\ word) have on learning? and \textit{(iv)} what do different modules learn about the syntactic and semantic structure of a language?
Answering such questions is imperative for fully understanding the NMT architecture. In this paper, we strive towards exploring \textit{(i)}, \textit{(ii)}, and \textit{(iii)} by providing quantitative, data-driven answers to the following specific questions:
\begin{itemize}%[itemsep=5pt,topsep=8pt] %[leftmargin=*]
\item Which parts of the NMT architecture capture word structure?
\item
What is the division of labor between different components (e.g.\ different layers or %of
encoder vs.\ decoder)?
\item How do different word representations help learn better morphology and modeling of infrequent words?
\item How does the target language affect the learning of word structure?
\end{itemize}
To achieve this, we follow a simple but effective procedure with three steps: \mbox{\textit{(i)} train} a neural MT system on a parallel corpus; \mbox{\textit{(ii)} use} the trained model to extract feature representations for words in a language of interest; and \mbox{\textit{(iii)} train} a classifier using extracted features to make predictions for another task.
We then evaluate the quality of the trained classifier on the given task as a proxy to the quality of the extracted representations. In this way, we obtain a quantitative measure of how well the original MT system learns features that are relevant to the given task.
We focus on the tasks of part-of-speech (POS) and full morphological tagging. We investigate how different neural MT systems capture POS and morphology through a series of experiments along several parameters. For instance, we contrast word-based and character-based representations, use different encoding layers, vary source and target languages, and compare extracting features from the encoder vs.\ the decoder.
We experiment with several languages with varying degrees of morphological richness:
French, German, Czech, Arabic, and Hebrew.\
Our analysis reveals interesting insights such as:
\begin{itemize}%[itemsep=3pt,topsep=5pt]%[leftmargin=*]
\item Character-based representations are much better for learning morphology,
especially for low-frequency words. This improvement is correlated with better BLEU scores. On the other hand, word-based models are sufficient for learning the structure of common words.
\item Lower layers of the
encoder are better at capturing word structure, while deeper networks improve translation quality,
suggesting that higher layers focus more
on word meaning.
\item The target language impacts the kind of information learned by the MT system. Translating into morphologically-poorer languages leads to better source-side word representations. This is partly, but not completely, correlated with BLEU scores.
\item The NMT encoder and decoder learn representations of similar quality. The attention mechanism affects the quality of the encoder representations more than that of the decoder representations.
\end{itemize}
\section{Data}
\paragraph{Language pairs
} We experiment with several language pairs, including morphologically-rich languages, that have received relatively significant attention in the MT community. These include Arabic-, German-, French-, and Czech-English pairs. To broaden
our analysis
and study the effect of having morphologically-rich languages on both source and target sides, we also
include
Arabic-Hebrew, two languages with rich and similar morphological systems, and Arabic-German, two languages with rich but different morphologies.
\paragraph{MT
data}
Our translation
models are trained on the WIT$^3$ corpus of TED talks \cite{cettoloEtAl:EAMT2012,cettolol:SeMaT:2016} made available for IWSLT 2016. This allows for comparable and cross-linguistic analysis. Statistics about each language pair are given in Table \ref{tab:tagsets} (under Pred).
We use official dev and test sets for tuning and testing. Reported figures are the averages over test sets.
\paragraph{Annotated data}
We use two kinds of datasets to train POS and morphological classifiers: gold-standard and predicted tags. For predicted tags, we simply used freely available taggers
to annotate the MT data. For gold tags, we use gold-annotated datasets. Table \ref{tab:tagsets} gives
statistics for datasets with gold and predicted tags; see
supplementary material
for
details on
taggers and gold data.
We train and test our classifiers on predicted annotations, and similarly on gold annotations, when we have them. We report both results wherever available.
\section{Decoder Analysis} \label{sec:dec-analysis}
So far we only looked at the encoder. However, the decoder \texttt{DEC}
is a crucial part in an MT system with access to both source and target sentences. In order to examine what the decoder learns about morphology, we first train an NMT system on the parallel corpus. Then, we use the trained model to encode a source sentence and extract features for words in the target sentence.
These features are used to train a classifier on POS or morphological tagging on the target side.\footnote{In this section we only experiment with
predicted tags
as there are no
parallel data
with gold POS/morphological
tags
that we are aware of.} %
Note that in this case the decoder is given the correct target words one-by-one, similar to the
usual NMT training regime.
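In code, the extraction step might look roughly as follows; \texttt{model.encode} and \texttt{model.decoder\_step} are hypothetical names standing in for the actual seq2seq implementation.
\begin{verbatim}
# Sketch of decoder-side feature extraction: encode the source, then feed the
# gold target words one-by-one (as in training) and keep the decoder hidden
# state at each step as the representation of that target word.
import torch

def decoder_features(model, src_ids, tgt_ids):
    feats = []
    with torch.no_grad():
        enc_states, state = model.encode(src_ids)
        for u in tgt_ids:                        # gold target words
            state, hidden = model.decoder_step(u, state, enc_states)
            feats.append(hidden)
    return torch.stack(feats)                    # one vector per target word
\end{verbatim}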
Table \ref{tab:pos-dec-enc-attn-nogold} (1st row) shows the results of using
representations extracted with \texttt{ENC}
and \texttt{DEC}
from the Arabic-English and English-Arabic models, respectively.
There is a modest drop in representation quality with the decoder. This drop may be correlated with lower BLEU scores when translating English to Arabic vs.\ Arabic to English. We observed similar small drops with higher-quality translation directions (Table~\ref{tab:decoder}, Appendix~\ref{sec:sup-results}).
The little gap between encoder and decoder representations may sound surprising, when we consider the fundamental tasks of the two modules. The encoder's task is to create a generic, close to language-independent representation of the source sentence, as shown by recent evidence from multilingual NMT \cite{johnson2016google}. The decoder's task is
to use this representation to generate the
target sentence in a specific language.
One might conjecture that it would be sufficient for the decoder to learn a strong language model in order %
to produce morphologically-correct output, without learning much about morphology, while the encoder needs to learn quite a lot about source language morphology in order to create a good generic representation. However, their performance seems more or less comparable.
In the following section we investigate the role of the attention mechanism in the division of labor between encoder and decoder.
\subsection{Effect of attention}
Consider the role of the attention mechanism in learning useful representations: during decoding, the attention weights are combined with the decoder's hidden states to generate the current translation. These two sources of information need to jointly point to the most relevant source word(s) and predict the next most likely word. Thus, the decoder puts significant emphasis on mapping back to the source sentence, which may come at the expense of obtaining a meaningful representation of the current word. We hypothesize that the attention mechanism might hurt
the quality of the target word representations learned by the decoder.
To test this hypothesis, we train NMT models with and without attention and compare the quality of their learned representations.
As Table~\ref{tab:pos-dec-enc-attn-nogold} shows (compare 1st and 2nd rows), removing the attention mechanism decreases the quality of the encoder representations significantly, but only mildly hurts the quality of the decoder representations. It seems that the decoder does not rely on the attention mechanism to obtain good target word representations, contrary to our hypothesis.
\subsection{Effect of word representation}
We also conducted experiments to verify our findings regarding word-based versus character-based representations on the decoder side. By character representation we mean a character CNN on the input words. The decoder predictions are still done at the word-level, which enables us to use its hidden states as word representations.
Table~\ref{tab:pos-dec-enc-word-char-nogold} shows POS accuracy of word-based %
vs.\ char-based representations in the encoder and decoder.
In both cases, char-based representations perform better.
BLEU scores behave differently:
the char-based model leads to better translations in Arabic-to-English, but not in English-to-Arabic.
A possible explanation for this phenomenon %
is that the decoder's predictions are still done at word level even with the char-based model (which encodes the target input but not the output). In practice, this can lead to generating unknown words. Indeed, in
Arabic-to-English
the char-based model reduces the number of generated unknown words %
in the MT %
test set by 25\%, while in
English-to-Arabic
the number of unknown words %
remains roughly the same between word-based %
and char-based models.
\section{Related Work} \label{sec:related-work}
\paragraph{Analysis of neural models}
The opacity of neural networks has motivated researchers to analyze such models in different ways. One line of work visualizes hidden unit activations in recurrent neural networks
that are trained for a given task \cite{elman1991distributed,karpathy2015visualizing,kadar2016representation,qian-qiu-huang:2016:EMNLP2016}. While such visualizations
illuminate the inner workings of the network,
they are often qualitative in nature and somewhat anecdotal.
A different approach tries
to provide a quantitative analysis by correlating parts of the neural %
network with linguistic properties, for example by training a classifier to predict features of interest. Different units have been used, from word embeddings \cite{kohn:2015:EMNLP,qian-qiu-huang:2016:P16-11}, through LSTM gates or states \cite{qian-qiu-huang:2016:EMNLP2016}, to sentence embeddings \cite{adi2016fine}. Our work is most similar to \newcite{shi-padhi-knight:2016:EMNLP2016}, who use hidden vectors from
a neural MT
encoder to predict syntactic properties on the English source side. In contrast, we focus on representations in morphologically-rich languages and evaluate both source and target sides across several criteria.
\newcite{vylomova2016word} also analyze different %
representations for morphologically-rich languages in MT, but do not directly measure the quality of the learned representations.
\paragraph{Word representations in MT}
Machine translation systems that deal with morphologically-rich languages resort to various techniques for representing morphological knowledge, such as word segmentation \cite{C00-2162,E03-1076,Badr:2008:SES:1557690.1557732}
and factored translation and reordering models \cite{koehn-hoang:2007:EMNLP-CoNLL2007,durrani-EtAl:2014:Coling}. Characters and other sub-word units have become increasingly popular in neural MT, although they had also been used in phrase-based MT for handling morphologically-rich \cite{Luong:D10-1015} or closely related language pairs \cite{durrani-EtAl:2010:ACL,Nakov:Tiedemann:2012}. In neural MT, such units are obtained in a pre-processing step---e.g.\ by byte-pair encoding \cite{sennrich-haddow-birch:2016:P16-12} or the word-piece model \cite{wu2016google}---or learned during training
using
a character-based convolutional/recurrent sub-network \cite{costajussa-fonollosa:2016:P16-2,Luong:P16-1100,vylomova2016word}. The latter approach has the advantage of keeping
the original word boundaries without requiring pre- and post-processing. Here we focus on a character CNN which has been used in language modeling and machine translation \cite{kim2015character,belinkov-glass:2016:SeMaT,
costajussa-fonollosa:2016:P16-2,jozefowicz2016exploring,sajjad:2017:ACL}. We evaluate the quality of different representations learned by an MT system augmented with a character CNN in terms of POS and morphological tagging, and contrast them with a purely word-based system.
\section{Encoder Analysis} \label{sec:enc-analysis}
Recall that after training the NMT system we freeze its parameters and use it only to generate features for the POS/morphology classifier. Given a trained encoder \texttt{ENC} and a sentence $s$ with POS/morphology annotation, we generate word features $\texttt{ENC}_i(s)$ for every word in the sentence. We then train a classifier that uses the features $\texttt{ENC}_i(s)$ to predict POS or morphological tags.
\subsection{Effect of word representation}
In this section, we compare different word representations extracted with different encoders. Our word-based model uses a word embedding matrix which is initialized randomly and learned with other NMT parameters. For a character-based model we adopt a convolutional neural network (CNN) over character embeddings that is also learned during training \cite{kim2015character,costajussa-fonollosa:2016:P16-2};
see appendix \ref{sec:sup-training} for specific settings.
In both cases we run the encoder over these representations and use its output $\texttt{ENC}_i(s)$ as features for the classifier.
Table~\ref{tab:results-all-pairs} shows
POS tagging accuracy
using features from different NMT encoders. Char-based models always generate better representations for POS tagging, especially
in the case of morphologically-richer languages like Arabic and Czech.
We observed a similar pattern in the full morphological tagging task. For example, we obtain morphological tagging accuracy of 65.2/79.66 and 67.66/81.66 using word/char-based representations from the Arabic-Hebrew and Arabic-English encoders, respectively.\footnote{The results are not far below
dedicated taggers (e.g.\ 95.1/84.1 on Arabic POS/morphology
\cite{PASHA14.593.L14-1479}), indicating
that NMT models
learn quite
good representations.}
The superior morphological power of the char-based model also manifests in better translation quality (measured by BLEU), as shown in Table~\ref{tab:results-all-pairs}.
\paragraph{Impact of word frequency}
Let us look more closely at an example case: Arabic POS and morphological tagging.
Figure~\ref{fig:repr} shows the effect of using word-based vs.\ char-based feature representations, obtained from the encoder of the Arabic-Hebrew system (other language pairs exhibit similar trends). Clearly, the char-based model is superior to the word-based one. This is
true for the overall accuracy (+14.3\% in POS, +14.5\% in morphology), but
more so on
OOV words (+37.6\% in POS, +32.7\% in morphology).
Figure~\ref{fig:repr-freqs} shows that the gap between word-based and char-based representations increases as the frequency of the word in the training data decreases. In other words, the more frequent the word, the less need there is for character information. These findings make intuitive sense: the char-based model is able to learn character n-gram patterns that are important for identifying word structure, but as the word becomes more frequent the word-based model has seen enough examples
to make a decision.
\paragraph{Analyzing specific tags}
In Figure~\ref{fig:repr-pos-cm} we plot confusion matrices for POS tagging using word-based and char-based representations (from Arabic encoders).
While the char-based representations are overall better, the two models still share similar misclassified tags.
Much of the confusion comes from wrongly predicting nouns (NN, NNP). In the word-based case,
relatively many
tags with determiner (DT+NNP, DT+NNPS, DT+NNS, DT+VBG) are wrongly predicted as non-determined nouns (NN, NNP). In the char-based case, this hardly
happens. This suggests that
char-based representations
are predictive of the presence of a determiner, which in Arabic is expressed as the prefix ``Al-'' (the definite article), a pattern easily captured by a char-based model.
In Figure~\ref{fig:repr-pos-tag-freq} we plot the difference in POS accuracy when moving from word-based to char-based representations, per POS tag frequency in the training data. Tags closer to the upper-right corner
occur more frequently in the training set and are better predicted by char-based compared to word-based representations.
There are a few
fairly frequent tags (in the middle-bottom part of the figure) whose accuracy does not improve much when moving from word- to char-based representations:
mostly conjunctions, determiners, and certain particles (CC, DT, WP). But there are several very frequent tags (NN, DT+NN, DT+JJ, VBP, and even PUNC) whose accuracy improves quite a lot. Then there are plural nouns (NNS, DT+NNS) where the char-based model really shines, which makes sense linguistically as plurality in Arabic is usually expressed by certain suffixes (\mbox{``-wn/yn''} for masc. plural, \mbox{``-At''} for fem. plural).
The char-based model is thus especially good with frequent tags and infrequent words, which
is understandable given that infrequent words typically belong to frequent open categories like nouns and verbs.
\subsection{Effect of encoder depth}
Modern NMT systems use very deep architectures with up to 8 or 16 layers \cite{wu2016google,TACL863}. We would like to understand
what kind of information different layers capture. Given a trained
model with multiple layers, we extract
representations from the
different layers in the encoder. Let $\texttt{ENC}^l_i(s)$ denote the encoded representation of word $w_i$ after the $l$-th layer. We
vary $l$ and train different classifiers to predict POS or morphological tags. Here we focus on the case of a 2-layer encoder-decoder
for simplicity ($l \in \{1,2\}$).
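A sketch of the per-layer extraction, where \texttt{encoder.layer\_outputs} is a hypothetical hook exposing the hidden states after each encoder layer:
\begin{verbatim}
# Sketch of probing individual encoder layers: read off ENC^l_i(s) after layer
# l and train a separate classifier per layer to compare tagging accuracies.
import torch

def layer_features(encoder, sentence_ids, layer):
    with torch.no_grad():
        outputs = encoder.layer_outputs(sentence_ids)  # [layer 0, layer 1, layer 2]
    return outputs[layer]                              # (N, hidden_dim)
\end{verbatim}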
Figure~\ref{fig:layer-effect-all-langs} shows POS tagging results using representations from different encoding layers across five
language pairs. The general trend is that passing word vectors through the
encoder improves POS
tagging, which can be explained by
contextual information contained in the representations after one layer. However,
it turns out that representations from the 1st layer are better than those from the 2nd layer, at least for the purpose of capturing word structure.
Figure~\ref{fig:layer-effect-lines} shows
that the same pattern holds for both word-based and char-based representations, on Arabic POS and morphological tagging. In all cases, layer 1 representations are better than layer 2 representations.\footnote{We found this result to be also true in French, German, and Czech experiments
(see
the supplementary material).
}
In contrast, BLEU scores actually increase when training \mbox{2-layer} vs.\ \mbox{1-layer} models (+1.11/+0.56 BLEU for Arabic-Hebrew word/char-based models).
Thus translation quality improves when adding layers but morphology quality degrades.
Intuitively, it seems that lower layers of the network learn to represent word structure
while higher layers focus more
on word meaning.
A similar pattern was recently observed in a joint language-vision deep recurrent net~\cite{gelderloos-chrupala:2016:COLING}.
\subsection{Effect of target language}
While translating from morphologically-rich languages is
challenging,
translating into such languages is even harder.
For instance, our basic system obtains BLEU
of 24.69/23.2 on Arabic/Czech to English, but only 13.37/13.9 on English to Arabic/Czech.
How does the target language affect
the learned source language representations? Does translating into a morphologically-rich language require more knowledge about source language morphology? In order to investigate these questions, we fix the source language and train NMT models on
different target languages.
For example, given an Arabic source
we train Arabic-to-English/Hebrew/German
systems.
These target languages represent a morphologically-poor language (English), a morphologically-rich language with similar morphology to the source language (Hebrew), and a morphologically-rich language with different morphology (German).
To make a fair comparison, we train the models on the intersection of the training data based on the source language. In this way the experimental setup is completely identical: the models are trained on the same Arabic sentences with different translations.
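
The corpus intersection can be sketched as follows (file names are hypothetical; each corpus is assumed to be a pair of line-aligned files). Only sentence pairs whose Arabic side occurs in all three training sets are kept, so every model sees identical source sentences:

\begin{verbatim}
def read_parallel(src_path, tgt_path):
    # One sentence per line, aligned across the two files.
    with open(src_path, encoding="utf-8") as fs, \
         open(tgt_path, encoding="utf-8") as ft:
        return list(zip((l.strip() for l in fs), (l.strip() for l in ft)))

corpora = {
    "en": read_parallel("ar-en.ar", "ar-en.en"),
    "he": read_parallel("ar-he.ar", "ar-he.he"),
    "de": read_parallel("ar-de.ar", "ar-de.de"),
}

# Arabic sentences shared by all three training sets.
common = set.intersection(*[{src for src, _ in pairs}
                            for pairs in corpora.values()])

# Identical source side, different target translations.
intersected = {lang: [(s, t) for s, t in pairs if s in common]
               for lang, pairs in corpora.items()}
\end{verbatim}
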
Figure~\ref{fig:target-lang} shows POS and morphology accuracy of word-based representations from the NMT encoders, as well as
corresponding BLEU scores. As expected, translating to
English is easier than translating to
the morphologically-richer Hebrew and German, resulting in higher BLEU.
Despite their similar morphologies,
translating Arabic to Hebrew is worse than Arabic to German, which can be attributed to the
richer Hebrew morphology
compared to German. POS and morphology accuracies share an intriguing pattern: the representations that are learned when translating to
English are better for predicting POS or morphology than those learned when translating to
German, which are in turn better than those learned when translating to
Hebrew. This is remarkable given that English is a morphologically-poor language that
does not display many of the
morphological properties that are found in the Arabic source. In contrast, German and Hebrew have richer morphologies, so one could expect that translating into them
would make the model learn more about morphology.
A possible explanation for this phenomenon is that the Arabic-English model is simply better
than the Arabic-Hebrew and Arabic-German models, as hinted by the BLEU scores in Table \ref{tab:results-all-pairs}. The inherent
difficulty in translating Arabic to Hebrew/German
may affect the ability to learn good representations of word structure.
To probe this more, we
trained an Arabic-Arabic autoencoder
on the same training data. We found that it learns
to recreate the test sentences extremely well, with very high BLEU scores (Figure~\ref{fig:target-lang}). However, its word representations are actually inferior for the purpose of POS/morphological tagging. This
implies that higher BLEU does
not necessarily entail better morphological representations. In other words, a better translation model learns more informative representations, but only when it is actually learning to translate rather than merely memorizing the data as in the autoencoder case.
We found
this to be consistently true also for char-based experiments,
and in other language pairs.
|
What do Neural Machine Translation Models Learn about Morphology? | 1704.03471 | Table 2: POS accuracy on gold and predicted tags using word-based and character-based representations, as well as corresponding BLEU scores. | [
"[EMPTY]",
"Gold",
"Pred",
"BLEU"
] | [
[
"[EMPTY]",
"Word/Char",
"Word/Char",
"Word/Char"
],
[
"Ar-En",
"80.31/93.66",
"89.62/95.35",
"24.7/28.4"
],
[
"Ar-He",
"78.20/92.48",
"88.33/94.66",
"9.9/10.7"
],
[
"De-En",
"87.68/94.57",
"93.54/94.63",
"29.6/30.4"
],
[
"Fr-En",
"–",
"94.61/95.55",
"37.8/38.8"
],
[
"Cz-En",
"–",
"75.71/79.10",
"23.2/25.4"
]
] | Char-based models always generate better representations for POS tagging, especially in the case of morphologically-richer languages like Arabic and Czech. We observed a similar pattern in the full morphological tagging task. For example, we obtain morphological tagging accuracy of 65.2/79.66 and 67.66/81.66 using word/char-based representations from the Arabic-Hebrew and Arabic-English encoders, respectively. The inherent difficulty in translating Arabic to Hebrew/German may affect the ability to learn good representations of word structure. To probe this more, we trained an Arabic-Arabic autoencoder on the same training data. However, its word representations are actually inferior for the purpose of POS/morphological tagging. This implies that higher BLEU does not necessarily entail better morphological representations. In other words, a better translation model learns more informative representations, but only when it is actually learning to translate rather than merely memorizing the data as in the autoencoder case. We found this to be consistently true also for char-based experiments, and in other language pairs. | \section{Motivation} \label{sec:motivation}
Translating morphologically-rich languages is especially difficult due to a large vocabulary size and a high level of sparsity. Different solutions have been proposed to deal with this problem, for example factored models in phrase-based MT~\cite{koehn-hoang:2007:EMNLP-CoNLL2007} or softmax approximations in neural MT~\cite{ruder-softmax}. More recently, neural MT systems have shown significant gains by exploiting characters and other sub-word units~\cite{costajussa-fonollosa:2016:P16-2,sennrich-haddow-birch:2016:P16-12,wu2016google}. Presumably, such models are better than word-based models in representing the structure of rare and unseen words. Indeed, \newcite{sennrich-haddow-birch:2016:P16-12} have found that the unigram translation accuracy of words decreases for lower-frequency words. They also observed somewhat different behavior when translating into different languages. It is less clear, however, what and how neural translation models learn about word structure. In this work we are interested in answering questions such as the following:
\begin{itemize}
\item How do character-based models improve neural MT?
\item What components of the NMT system encode word structure?
\item How does the target language affect the learning of word structure?
\item What is the role of the decoder in learning word representations?
\end{itemize}
In the next section, we describe our data-driven approach for addressing such questions. We aim to obtain quantitative answers that will lead to generalizable conclusions.
\section{Methodology}
Given a source sentence $s = \{w_1, w_2, ..., w_N\}$ and a target sentence $t=\{u_1, u_2, ..., u_M\}$, we first
generate a vector representation for the source sentence
using
an encoder (Eqn.\ \ref{eq:enc})
and then
map this vector to the target sentence
using
a decoder (Eqn.\ \ref{eq:dec})
\cite{sutskever2014sequence}:
\begin{align}
&\texttt{ENC}: s=\{w_1, w_2, ..., w_N\} \mapsto \sss \in \reals^k \label{eq:enc} \\
&\texttt{DEC} : \sss \in \reals^k \mapsto t=\{u_1, u_2, ..., u_M\} \label{eq:dec}
\end{align}
In this work,
we use long short-term memory (LSTM) \cite{hochreiter1997long} encoder-decoders with attention \cite{bahdanau2014neural},
which we train on parallel data.
After training the NMT system, we freeze the parameters of the encoder and use \texttt{ENC} as a feature extractor to generate vectors representing
words in the sentence. Let $\texttt{ENC}_i(s)$ denote the encoded representation of word $w_i$. For example, this may be the output of the LSTM after word $w_i$. We
feed $\texttt{ENC}_i(s)$ to a neural
classifier that is trained to predict POS or morphological tags and evaluate the quality of the representation based on our ability to train a good classifier.
By comparing the performance of classifiers trained with features from different instantiations of \texttt{ENC}, we can evaluate what
MT encoders learn about word structure. Figure \ref{fig:approach} illustrates this process.
We follow a similar procedure
for analyzing representation learning in $\texttt{DEC}$.
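
The extraction step amounts to something like the following sketch (PyTorch-style, with invented names; it assumes the frozen encoder maps a tensor of word ids to one hidden state per word, which is not the literal interface of the MT toolkit):

\begin{verbatim}
import torch

def extract_features(encoder, sentences, tag_sequences, word2id):
    # Pair every per-word encoder state with the word's POS or morphological
    # tag; the resulting pairs form the training set for the classifier.
    encoder.eval()                      # parameters stay frozen
    feats, labels = [], []
    with torch.no_grad():
        for words, tags in zip(sentences, tag_sequences):
            ids = torch.tensor([[word2id.get(w, word2id["<unk>"])
                                 for w in words]])
            states = encoder(ids)       # assumed shape: (1, len(words), dim)
            for i, tag in enumerate(tags):
                feats.append(states[0, i])
                labels.append(tag)
    return torch.stack(feats), labels
\end{verbatim}
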
The
classifier itself can be modeled in different ways. For example, it may be an LSTM over outputs of the encoder. However, as we are interested in assessing the quality of the representations learned by the MT system,
we choose to model the classifier as a simple feed-forward
network with one hidden layer and a ReLU non-linearity. Arguably, if the learned representations are good, then a non-linear classifier should be able to extract useful information from them.\footnote{We also experimented with a linear classifier and observed similar trends to the non-linear case, but overall lower results; \newcite{qian-qiu-huang:2016:P16-11} reported similar findings.}
We emphasize that our goal is not to beat the state-of-the-art on a given task, but rather to analyze what NMT models %
learn about morphology.
The classifier is trained with a cross-entropy loss;
more details on
its architecture are
in the supplementary material.
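
Concretely, the probing classifier can be sketched as below (the tag-set size is a placeholder; hidden size and dropout follow the supplementary material):

\begin{verbatim}
import torch.nn as nn

class TagClassifier(nn.Module):
    # Feed-forward probe: one hidden layer with a ReLU non-linearity,
    # trained with a cross-entropy loss over the tag set.
    def __init__(self, input_dim=500, num_tags=42, dropout=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, input_dim),  # hidden size = encoder state size
            nn.ReLU(),
            nn.Dropout(dropout),
            nn.Linear(input_dim, num_tags),   # scores over POS/morphological tags
        )

    def forward(self, features):
        return self.net(features)
\end{verbatim}
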
\documentclass[11pt,a4paper]{article}
\usepackage[hyperref]{acl2017}
\usepackage[normalem]{ulem}
% http://ctan.org/pkg/pifont
\newcommand{\cmark}{\ding{51}}%
\newcommand{\xmark}{\ding{55}}%
\newcommand\alert[1]{{\textcolor{red}{#1}}}
\aclfinalcopy % Uncomment this line for the final submission
\def\aclpaperid{496} % Enter the acl Paper ID here
\newcommand\BibTeX{B{\sc ib}\TeX}
\newcommand{\reals}{\mathbb{R}}
\newcommand{\xx}{\mathbf{x}}
\newcommand{\ii}{\mathbf{i}}
\newcommand{\ff}{\mathbf{f}}
\newcommand{\oo}{\mathbf{o}}
\newcommand{\cc}{\mathbf{c}}
\newcommand{\bb}{\mathbf{b}}
\newcommand{\hh}{\mathbf{h}}
\newcommand{\uu}{\mathbf{u}}
\newcommand{\ww}{\mathbf{w}} % word representation
\newcommand{\sss}{\mathbf{s}} % sentence representation
\newcommand{\WW}{\mathbf{W}}
\newcommand{\mm}{\mathbf{m}} % memory
\newcommand{\aaa}{\mathbf{a}} % attention
\newcommand{\rr}{\mathbf{r}} % attention
\newcommand{\zz}{\mathbf{z}} % noise
\title{What do Neural Machine Translation Models Learn about Morphology?}
\author{Yonatan Belinkov$^1$ ~~ Nadir Durrani$^2$ ~~ Fahim Dalvi$^2$ ~~ Hassan Sajjad$^2$ ~~ James Glass$^1$ \\\\
$^1$MIT Computer Science and Artificial Intelligence Laboratory, Cambridge, MA 02139, USA \\
{\tt \{belinkov, glass\}@mit.edu} \\
$^2$Qatar Computing Research Institute, HBKU, Doha, Qatar \\
{\tt \{ndurrani, faimaduddin, hsajjad\}@qf.org.qa}
}
\date{}
\begin{document}
\maketitle
\begin{framed}
\noindent This is a modified version of a paper originally published at ACL 2017 with updated results and discussion in section 5.
\end{framed}
\begin{abstract}
Neural machine translation (MT) models obtain state-of-the-art performance while maintaining a simple, end-to-end architecture. However, little is known about what these models learn about source and target languages during the training process.
In this work, we analyze the representations learned by neural MT models at various levels of granularity and empirically evaluate the quality of the
representations
for learning morphology
through extrinsic part-of-speech and morphological tagging tasks. We conduct a thorough investigation along several parameters: word-based vs.\ character-based representations, depth of the encoding layer, the identity
of the target language, and encoder vs.\ decoder representations. Our data-driven, quantitative evaluation sheds light on important aspects in the neural MT system and its ability to capture word structure.\footnote{Our code is available at \url{https://github.com/boknilev/nmt-repr-analysis}.}
\end{abstract}
\input{introduction}
\input{methodology}
\input{data}
\input{encoder-analysis}
\input{decoder-analysis}
\input{related-work}
\input{conclusion}
\section*{Acknowledgments}
We would like to thank Helmut Schmid for providing the Tiger corpus, members of the MIT Spoken Language Systems group for helpful comments, and the three anonymous reviewers for their useful suggestions.
This research was carried out in collaboration between the HBKU Qatar Computing Research Institute (QCRI) and the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).
\bibliographystyle{acl_natbib}
\newpage
\appendix
\input{supplement}
\end{document}
\section{Conclusion}
Neural networks have become ubiquitous in machine translation due to their elegant architecture and good performance. The representations they use for linguistic units are crucial for obtaining high-quality translation. In this work,
we investigated how neural MT models learn word structure. We evaluated their representation quality on POS and morphological tagging in a number of languages. Our results lead to the following conclusions:
\begin{itemize}%[itemsep=1pt,topsep=5pt]%[leftmargin=*]
\item Character-based representations are better than word-based ones for learning morphology, especially in rare and unseen words.
\item Lower layers of the neural network are better at capturing morphology, while deeper networks improve translation performance. We hypothesize that lower layers are more focused on word structure, while higher ones are focused on word meaning.
\item Translating into morphologically-poorer languages leads to better source-side
representations. This is partly, but not completely, %
correlated with BLEU scores. %
\item There are only small differences between encoder and decoder representation quality. The attention mechanism does not seem to significantly affect the quality of the decoder representations, while it is important for the encoder representations.
\end{itemize}
These insights can guide further development of neural MT systems. For instance, jointly learning translation and morphology can possibly lead to better representations and improved translation. Our analysis indicates that this kind of approach should take into account factors such as the encoding layer and the type of word representation.
Another area for future work is to extend the analysis to other word %
representations (e.g.\ byte-pair encoding),
deeper networks,
and more
semantically-oriented tasks such as
semantic role-labeling or
semantic parsing.
\section{Supplementary Material}
\label{sec:supplemental}
\subsection{Training Details} \label{sec:sup-training}
\paragraph{POS/Morphological classifier} The classifier used for all prediction tasks is a feed-forward network with one hidden layer, dropout ($\rho=0.5$), a ReLU non-linearity, and an output layer mapping to the tag set (followed by a Softmax). The size of the hidden layer is set to be identical to the size of the encoder's hidden state (typically 500 dimensions). We use Adam \cite{kingma2014adam} with default parameters to minimize the cross-entropy objective.
Training is run with
mini-batches of size 16 and stopped once the loss on the dev set stops improving; we allow a patience of 5 epochs.
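
A training loop following these settings might look as follows (function and variable names are ours; Adam is used with its default parameters, mini-batches of 16, and early stopping with a patience of 5 epochs):

\begin{verbatim}
import torch
from torch.utils.data import DataLoader, TensorDataset

def train_probe(classifier, train_x, train_y, dev_x, dev_y,
                patience=5, max_epochs=100):
    opt = torch.optim.Adam(classifier.parameters())    # default parameters
    loss_fn = torch.nn.CrossEntropyLoss()               # cross-entropy objective
    loader = DataLoader(TensorDataset(train_x, train_y),
                        batch_size=16, shuffle=True)
    best_dev, bad_epochs = float("inf"), 0
    for _ in range(max_epochs):
        classifier.train()
        for x, y in loader:
            opt.zero_grad()
            loss_fn(classifier(x), y).backward()
            opt.step()
        classifier.eval()
        with torch.no_grad():
            dev_loss = loss_fn(classifier(dev_x), dev_y).item()
        if dev_loss < best_dev:
            best_dev, bad_epochs = dev_loss, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:                   # dev loss stopped improving
                break
    return classifier
\end{verbatim}
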
\paragraph{Neural MT system}
We train a 2-layer LSTM encoder-decoder with attention.
We use the \texttt{seq2seq-attn} implementation \cite{kim2016} with the following default settings: word vectors and LSTM states have 500 dimensions, SGD with initial learning rate of 1.0 and rate decay of 0.5, and dropout rate of 0.3. The character-based model is a CNN with a highway network over characters \cite{kim2015character} with 1000 feature maps and a
kernel width of 6 characters. This model was found to be
useful for translating morphologically-rich languages
\cite{costajussa-fonollosa:2016:P16-2}. The MT system is trained for 20 epochs, and the model with the best dev loss is
used for extracting features for the classifier.
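
For illustration, the character-based word representation can be sketched as a convolution over character embeddings with 1000 feature maps and kernel width 6, max-over-time pooling, and a highway layer (a simplified stand-in for the toolkit's implementation; the character-vocabulary and character-embedding sizes are placeholders):

\begin{verbatim}
import torch
import torch.nn as nn

class CharCNNWordEncoder(nn.Module):
    def __init__(self, n_chars=120, char_dim=25, n_filters=1000, width=6):
        super().__init__()
        self.char_embed = nn.Embedding(n_chars, char_dim)
        self.conv = nn.Conv1d(char_dim, n_filters, kernel_size=width)
        self.gate = nn.Linear(n_filters, n_filters)        # highway gate
        self.transform = nn.Linear(n_filters, n_filters)   # highway transform

    def forward(self, char_ids):          # (batch, word_len), word_len >= width
        e = self.char_embed(char_ids).transpose(1, 2)      # (batch, char_dim, len)
        h = torch.relu(self.conv(e)).max(dim=2).values     # max over time
        g = torch.sigmoid(self.gate(h))
        return g * torch.relu(self.transform(h)) + (1 - g) * h
\end{verbatim}
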
\subsection{Data and Taggers} \label{sec:sup-data}
\paragraph{Datasets}
All of the translation models are trained on the Ted talks corpus included in WIT$^3$ \cite{cettoloEtAl:EAMT2012,cettolol:SeMaT:2016}. Statistics about each language pair are available on the WIT$^3$ website: \url{https://wit3.fbk.eu}.
For experiments using gold tags, we used the Arabic Treebank for Arabic (with the versions and splits described in the MADAMIRA manual \cite{PASHA14.593.L14-1479}) and the Tiger corpus for German.\footnote{\url{http://www.ims.uni-stuttgart.de/forschung/ressourcen/korpora/tiger.html}}
\paragraph{POS and morphological taggers}
We used the following tools to annotate the MT corpora: MADAMIRA \cite{PASHA14.593.L14-1479} for Arabic POS and morphological tags, Tree-Tagger \cite{schmid:2004:PAPERS} for Czech and French POS tags, LoPar \cite{schmid:00a} for German POS and morphological tags, and MXPOST \cite{ratnaparkhi98maximum} for English POS tags.
These tools are recommended
on the Moses website.\footnote{\url{http://www.statmt.org/moses/?n=Moses.ExternalTools}}
As mentioned before, our goal is not to achieve state-of-the-art
results, but rather to study what different components of the NMT architecture learn about word morphology.
Please refer to \newcite{mueller-schmid-schutze:2013:EMNLP} for representative POS and morphological tagging accuracies.
\subsection{Supplementary Results}
\label{sec:sup-results}
We report here results that were omitted from the paper due to the space limit.
Table \ref{tab:different_layers} shows encoder results using
different layers,
languages,
and
representations (word/char-based).
As noted in the paper,
all
the results consistently show that i) layer 1 performs better than
layers 0 and 2;
and
ii) char-based representations are better than word-based for learning morphology.
Table \ref{tab:different_language}
shows that translating into a morphologically-poor language (English) leads to better source
representations, and Table \ref{tab:decoder} provides
additional decoder results.
Table~\ref{tab:decoder-old} shows POS tagging accuracy using decoder representations, where the current word representation was used to predict the next word's tag. The idea is to evaluate whether the current word representation contains POS information about the output of the decoder. Clearly, the current word representation cannot be used to predict the next word's tag. This also holds when removing the attention (En-Ar, 85.54\%) or using character-based representations (En-Ar, 44.5\%).
Since the decoder representation is in the pre-Softmax layer, this means that most of the work of predicting the next word is done in the Softmax layer, while the pre-Softmax representation contains much information about the current input word.
\newpage
\section{Introduction}
Neural network models are quickly becoming the predominant approach to machine translation (MT). Training neural MT (NMT) models can be done in an end-to-end fashion, which is simpler and more elegant than traditional MT systems. Moreover, NMT systems have become competitive with, or better than, the previous state-of-the-art, especially since the introduction of sequence-to-sequence models and the attention mechanism \cite{bahdanau2014neural,sutskever2014sequence}.
The improved translation quality is often attributed to better handling of non-local dependencies and morphology generation \cite{luong-manning:iwslt15,bentivogli-EtAl:2016:EMNLP2016,toral-sanchezcartagena:2017:EACLlong}.
\bigskip
However,
little is known about what and how much these models
learn about each language and its features.
Recent work has started exploring the role of the NMT encoder in learning source syntax \cite{shi-padhi-knight:2016:EMNLP2016},
but
research studies are yet to answer important questions such as: \textit{(i)} what do NMT models learn about word morphology? \textit{(ii)} what is the effect on learning when translating into/from morphologically-rich languages? \mbox{\textit{(iii)} what} impact do different representations (character vs.\ word) have on learning? and \textit{(iv)} what do different modules learn about the syntactic and semantic structure of a language?
Answering such questions is imperative for fully understanding the NMT architecture. In this paper, we strive towards exploring \textit{(i)}, \textit{(ii)}, and \textit{(iii)} by providing quantitative, data-driven answers to the following specific questions:
\begin{itemize}%[itemsep=5pt,topsep=8pt] %[leftmargin=*]
\item Which parts of the NMT architecture capture word structure?
\item
What is the division of labor between different components (e.g.\ different layers or %of
encoder vs.\ decoder)?
\item How do different word representations help learn better morphology and modeling of infrequent words?
\item How does the target language affect the learning of word structure?
\end{itemize}
To achieve this, we follow a simple but effective procedure with three steps: \mbox{\textit{(i)} train} a neural MT system on a parallel corpus; \mbox{\textit{(ii)} use} the trained model to extract feature representations for words in a language of interest; and \mbox{\textit{(iii)} train} a classifier using extracted features to make predictions for another task.
We then evaluate the quality of the trained classifier on the given task as a proxy to the quality of the extracted representations. In this way, we obtain a quantitative measure of how well the original MT system learns features that are relevant to the given task.
We focus on the tasks of part-of-speech (POS) and full morphological tagging. We investigate how different neural MT systems capture POS and morphology through a series of experiments along several parameters. For instance, we contrast word-based and character-based representations, use different encoding layers, vary source and target languages, and compare extracting features from the encoder vs.\ the decoder.
We experiment with several languages with varying degrees of morphological richness:
French, German, Czech, Arabic, and Hebrew.\
Our analysis reveals interesting insights such as:
\begin{itemize}%[itemsep=3pt,topsep=5pt]%[leftmargin=*]
\item Character-based representations are much better for learning morphology,
especially for low-frequency words. This improvement is correlated with better BLEU scores. On the other hand, word-based models are sufficient for learning the structure of common words.
\item Lower layers of the
encoder are better at capturing word structure, while deeper networks improve translation quality,
suggesting that higher layers focus more
on word meaning.
\item The target language impacts the kind of information learned by the MT system. Translating into morphologically-poorer languages leads to better source-side word representations. This is partly, but not completely, correlated with BLEU scores.
\item The NMT encoder and decoder learn representations of similar quality. The attention mechanism affects the quality of the encoder representations more than that of the decoder representations.
\end{itemize}
\section{Data}
\paragraph{Language pairs
} We experiment with several language pairs, including morphologically-rich languages, that have received relatively significant attention in the MT community. These include Arabic-, German-, French-, and Czech-English pairs. To broaden
our analysis
and study the effect of having morphologically-rich languages on both source and target sides, we also
include
Arabic-Hebrew, two languages with rich and similar morphological systems, and Arabic-German, two languages with rich but different morphologies.
\paragraph{MT
data}
Our translation
models are trained on the WIT$^3$ corpus of TED talks \cite{cettoloEtAl:EAMT2012,cettolol:SeMaT:2016} made available for IWSLT 2016. This allows for comparable and cross-linguistic analysis. Statistics about each language pair are given in Table \ref{tab:tagsets} (under Pred).
We use official dev and test sets for tuning and testing. Reported figures are the averages over test sets.
\paragraph{Annotated data}
We use two kinds of datasets to train POS and morphological classifiers: gold-standard and predicted tags. For predicted tags, we simply used freely available taggers
to annotate the MT data. For gold tags, we use gold-annotated datasets. Table \ref{tab:tagsets} gives
statistics for datasets with gold and predicted tags; see
supplementary material
for
details on
taggers and gold data.
We train and test our classifiers on predicted annotations, and similarly on gold annotations, when we have them. We report both results wherever available.
\section{Decoder Analysis} \label{sec:dec-analysis}
So far we only looked at the encoder. However, the decoder \texttt{DEC}
is a crucial part in an MT system with access to both source and target sentences. In order to examine what the decoder learns about morphology, we first train an NMT system on the parallel corpus. Then, we use the trained model to encode a source sentence and extract features for words in the target sentence.
These features are used to train a classifier on POS or morphological tagging on the target side.\footnote{In this section we only experiment with
predicted tags
as there are no
parallel data
with gold POS/morphological
tags
that we are aware of.} %
Note that in this case the decoder is given the correct target words one-by-one, similar to the
usual NMT training regime.
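
Schematically, and assuming the trained model exposes an encode function and a single-step decode function (the real toolkit interface differs), the target-side features are collected as follows:

\begin{verbatim}
import torch

def extract_decoder_features(model, src_ids, tgt_ids):
    # Feed the gold target words one-by-one, as during training, and record
    # the decoder hidden state produced for each target word.
    model.eval()
    feats = []
    with torch.no_grad():
        enc_states, dec_state = model.encode(src_ids)
        for t in range(tgt_ids.size(1)):
            out, dec_state = model.decode_step(tgt_ids[:, t],
                                               dec_state, enc_states)
            feats.append(out.squeeze(0))   # representation of target word u_t
    return torch.stack(feats)
\end{verbatim}
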
Table \ref{tab:pos-dec-enc-attn-nogold} (1st row) shows the results of using
representations extracted with \texttt{ENC}
and \texttt{DEC}
from the Arabic-English and English-Arabic models, respectively.
There is a modest drop in representation quality with the decoder. This drop may be correlated with lower BLEU scores when translating English to Arabic vs.\ Arabic to English. We observed similar small drops with higher-quality translation directions (Table~\ref{tab:decoder}, Appendix~\ref{sec:sup-results}).
The little gap between encoder and decoder representations may sound surprising, when we consider the fundamental tasks of the two modules. The encoder's task is to create a generic, close to language-independent representation of the source sentence, as shown by recent evidence from multilingual NMT \cite{johnson2016google}. The decoder's task is
to use this representation to generate the
target sentence in a specific language.
One might conjecture that it would be sufficient for the decoder to learn a strong language model in order %
to produce morphologically-correct output, without learning much about morphology, while the encoder needs to learn quite a lot about source language morphology in order to create a good generic representation. However, their performance seems more or less comparable.
In the following section we investigate the role of the attention mechanism in the division of labor between the encoder and the decoder.
\subsection{Effect of attention}
Consider the role of the attention mechanism in learning useful representations: during decoding, the attention weights are combined with the decoder's hidden states to generate the current translation. These two sources of information need to jointly point to the most relevant source word(s) and predict the next most likely word. Thus, the decoder puts significant emphasis on mapping back to the source sentence, which may come at the expense of obtaining a meaningful representation of the current word. We hypothesize that the attention mechanism might hurt
the quality of the target word representations learned by the decoder.
To test this hypothesis, we train NMT models with and without attention and compare the quality of their learned representations.
As Table~\ref{tab:pos-dec-enc-attn-nogold} shows (compare 1st and 2nd rows), removing the attention mechanism decreases the quality of the encoder representations significantly, but only mildly hurts the quality of the decoder representations. It seems that the decoder does not rely on the attention mechanism to obtain good target word representations, contrary to our hypothesis.
\subsection{Effect of word representation}
We also conducted experiments to verify our findings regarding word-based versus character-based representations on the decoder side. By character representation we mean a character CNN on the input words. The decoder predictions are still done at the word-level, which enables us to use its hidden states as word representations.
Table~\ref{tab:pos-dec-enc-word-char-nogold} shows POS accuracy of word-based %
vs.\ char-based representations in the encoder and decoder.
In both cases, char-based representations perform better.
BLEU scores behave differently:
the char-based model leads to better translations in Arabic-to-English, but not in English-to-Arabic.
A possible explanation for this phenomenon %
is that the decoder's predictions are still done at word level even with the char-based model (which encodes the target input but not the output). In practice, this can lead to generating unknown words. Indeed, in
Arabic-to-English
the char-based model reduces the number of generated unknown words %
in the MT %
test set by 25\%, while in
English-to-Arabic
the number of unknown words %
remains roughly the same between word-based %
and char-based models.
\section{Related Work} \label{sec:related-work}
\paragraph{Analysis of neural models}
The opacity of neural networks has motivated researchers to analyze such models in different ways. One line of work visualizes hidden unit activations in recurrent neural networks
that are trained for a given task \cite{elman1991distributed,karpathy2015visualizing,kadar2016representation,qian-qiu-huang:2016:EMNLP2016}. While such visualizations
illuminate the inner workings of the network,
they are often qualitative in nature and somewhat anecdotal.
A different approach tries
to provide a quantitative analysis by correlating parts of the neural %
network with linguistic properties, for example by training a classifier to predict features of interest. Different units have been used, from word embeddings \cite{kohn:2015:EMNLP,qian-qiu-huang:2016:P16-11}, through LSTM gates or states \cite{qian-qiu-huang:2016:EMNLP2016}, to sentence embeddings \cite{adi2016fine}. Our work is most similar to \newcite{shi-padhi-knight:2016:EMNLP2016}, who use hidden vectors from
a neural MT
encoder to predict syntactic properties on the English source side. In contrast, we focus on representations in morphologically-rich languages and evaluate both source and target sides across several criteria.
\newcite{vylomova2016word} also analyze different %
representations for morphologically-rich languages in MT, but do not directly measure the quality of the learned representations.
\paragraph{Word representations in MT}
Machine translation systems that deal with morphologically-rich languages resort to various techniques for representing morphological knowledge, such as word segmentation \cite{C00-2162,E03-1076,Badr:2008:SES:1557690.1557732}
and factored translation and reordering models \cite{koehn-hoang:2007:EMNLP-CoNLL2007,durrani-EtAl:2014:Coling}. Characters and other sub-word units have become increasingly popular in neural MT, although they had also been used in phrase-based MT for handling morphologically-rich \cite{Luong:D10-1015} or closely related language pairs \cite{durrani-EtAl:2010:ACL,Nakov:Tiedemann:2012}. In neural MT, such units are obtained in a pre-processing step---e.g.\ by byte-pair encoding \cite{sennrich-haddow-birch:2016:P16-12} or the word-piece model \cite{wu2016google}---or learned during training
using
a character-based convolutional/recurrent sub-network \cite{costajussa-fonollosa:2016:P16-2,Luong:P16-1100,vylomova2016word}. The latter approach has the advantage of keeping
the original word boundaries without requiring pre- and post-processing. Here we focus on a character CNN which has been used in language modeling and machine translation \cite{kim2015character,belinkov-glass:2016:SeMaT,
costajussa-fonollosa:2016:P16-2,jozefowicz2016exploring,sajjad:2017:ACL}. We evaluate the quality of different representations learned by an MT system augmented with a character CNN in terms of POS and morphological tagging, and contrast them with a purely word-based system.
\section{Encoder Analysis} \label{sec:enc-analysis}
Recall that after training the NMT system we freeze its parameters and use it only to generate features for the POS/morphology classifier. Given a trained encoder \texttt{ENC} and a sentence $s$ with POS/morphology annotation, we generate word features $\texttt{ENC}_i(s)$ for every word in the sentence. We then train a classifier that uses the features $\texttt{ENC}_i(s)$ to predict POS or morphological tags.
\subsection{Effect of word representation}
In this section, we compare different word representations extracted with different encoders. Our word-based model uses a word embedding matrix which is initialized randomly and learned with other NMT parameters. For a character-based model we adopt a convolutional neural network (CNN) over character embeddings that is also learned during training \cite{kim2015character,costajussa-fonollosa:2016:P16-2};
see appendix \ref{sec:sup-training} for specific settings.
In both cases we run the encoder over these representations and use its output $\texttt{ENC}_i(s)$ as features for the classifier.
Table~\ref{tab:results-all-pairs} shows
POS tagging accuracy
using features from different NMT encoders. Char-based models always generate better representations for POS tagging, especially
in the case of morphologically-richer languages like Arabic and Czech.
We observed a similar pattern in the full morphological tagging task. For example, we obtain morphological tagging accuracy of 65.2/79.66 and 67.66/81.66 using word/char-based representations from the Arabic-Hebrew and Arabic-English encoders, respectively.\footnote{The results are not far below
dedicated taggers (e.g.\ 95.1/84.1 on Arabic POS/morphology
\cite{PASHA14.593.L14-1479}), indicating
that NMT models
learn quite
good representations.}
The superior morphological power of the char-based model also manifests in better translation quality (measured by BLEU), as shown in Table~\ref{tab:results-all-pairs}.
\paragraph{Impact of word frequency}
Let us look more closely at an example case: Arabic POS and morphological tagging.
Figure~\ref{fig:repr} shows the effect of using word-based vs.\ char-based feature representations, obtained from the encoder of the Arabic-Hebrew system (other language pairs exhibit similar trends). Clearly, the char-based model is superior to the word-based one. This is
true for the overall accuracy (+14.3\% in POS, +14.5\% in morphology), but
more so on
OOV words (+37.6\% in POS, +32.7\% in morphology).
Figure~\ref{fig:repr-freqs} shows that the gap between word-based and char-based representations increases as the frequency of the word in the training data decreases. In other words, the more frequent the word, the less need there is for character information. These findings make intuitive sense: the char-based model is able to learn character n-gram patterns that are important for identifying word structure, but as the word becomes more frequent the word-based model has seen enough examples
to make a decision.
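
The frequency breakdown behind Figure~\ref{fig:repr-freqs} can be reproduced by bucketing test tokens by their frequency in the MT training data (the bin edges below are illustrative):

\begin{verbatim}
from collections import Counter

def accuracy_by_frequency(tokens, gold, pred, train_tokens,
                          bins=(0, 1, 5, 10, 100)):
    freq = Counter(train_tokens)
    totals, correct = Counter(), Counter()
    for tok, g, p in zip(tokens, gold, pred):
        # index of the largest bin edge not exceeding the token's frequency
        b = max(i for i, edge in enumerate(bins) if freq[tok] >= edge)
        totals[b] += 1
        correct[b] += int(g == p)
    return {bins[b]: correct[b] / totals[b] for b in totals}
\end{verbatim}
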
\paragraph{Analyzing specific tags}
In Figure~\ref{fig:repr-pos-cm} we plot confusion matrices for POS tagging using word-based and char-based representations (from Arabic encoders).
While the char-based representations are overall better, the two models still share similar misclassified tags.
Much of the confusion comes from wrongly predicting nouns (NN, NNP). In the word-based case,
relatively many
tags with determiner (DT+NNP, DT+NNPS, DT+NNS, DT+VBG) are wrongly predicted as non-determined nouns (NN, NNP). In the char-based case, this hardly
happens. This suggests that
char-based representations
are predictive of the presence of a determiner, which in Arabic is expressed as the prefix ``Al-'' (the definite article), a pattern easily captured by a char-based model.
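
A lightweight way to surface such confusions, without plotting full matrices, is to count (gold, predicted) tag pairs directly:

\begin{verbatim}
from collections import Counter

def top_confusions(gold_tags, pred_tags, k=10):
    # Most frequent misclassifications, e.g. a DT+NN token predicted as NN.
    pairs = Counter(zip(gold_tags, pred_tags))
    errors = Counter({gp: c for gp, c in pairs.items() if gp[0] != gp[1]})
    return errors.most_common(k)
\end{verbatim}
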
In Figure~\ref{fig:repr-pos-tag-freq} we plot the difference in POS accuracy when moving from word-based to char-based representations, per POS tag frequency in the training data. Tags closer to the upper-right corner
occur more frequently in the training set and are better predicted by char-based compared to word-based representations.
There are a few
fairly frequent tags (in the middle-bottom part of the figure) whose accuracy does not improve much when moving from word- to char-based representations:
mostly conjunctions, determiners, and certain particles (CC, DT, WP). But there are several very frequent tags (NN, DT+NN, DT+JJ, VBP, and even PUNC) whose accuracy improves quite a lot. Then there are plural nouns (NNS, DT+NNS) where the char-based model really shines, which makes sense linguistically as plurality in Arabic is usually expressed by certain suffixes (\mbox{``-wn/yn''} for masc. plural, \mbox{``-At''} for fem. plural).
The char-based model is thus especially good with frequent tags and infrequent words, which
is understandable given that infrequent words typically belong to frequent open categories like nouns and verbs.
\subsection{Effect of encoder depth}
Modern NMT systems use very deep architectures with up to 8 or 16 layers \cite{wu2016google,TACL863}. We would like to understand
what kind of information different layers capture. Given a trained
model with multiple layers, we extract
representations from the
different layers in the encoder. Let $\texttt{ENC}^l_i(s)$ denote the encoded representation of word $w_i$ after the $l$-th layer. We
vary $l$ and train different classifiers to predict POS or morphological tags. Here we focus on the case of a 2-layer encoder-decoder
for simplicity ($l \in \{1,2\}$).
Figure~\ref{fig:layer-effect-all-langs} shows POS tagging results using representations from different encoding layers across five
language pairs. The general trend is that passing word vectors through the
encoder improves POS
tagging, which can be explained by
contextual information contained in the representations after one layer. However,
it turns out that representations from the 1st layer are better than those from the 2nd layer, at least for the purpose of capturing word structure.
Figure~\ref{fig:layer-effect-lines} shows
that the same pattern holds for both word-based and char-based representations, on Arabic POS and morphological tagging. In all cases, layer 1 representations are better than layer 2 representations.\footnote{We found this result to be also true in French, German, and Czech experiments
(see
the supplementary material).
}
In contrast, BLEU scores actually increase when training \mbox{2-layer} vs.\ \mbox{1-layer} models (+1.11/+0.56 BLEU for Arabic-Hebrew word/char-based models).
Thus translation quality improves when layers are added, but the quality of the learned morphological representations degrades.
Intuitively, it seems that lower layers of the network learn to represent word structure
while higher layers focus more
on word meaning.
A similar pattern was recently observed in a joint language-vision deep recurrent net~\cite{gelderloos-chrupala:2016:COLING}.
\subsection{Effect of target language}
While translating from morphologically-rich languages is
challenging,
translating into such languages is even harder.
For instance, our basic system obtains BLEU
of 24.69/23.2 on Arabic/Czech to English, but only 13.37/13.9 on English to Arabic/Czech.
How does the target language affect
the learned source language representations? Does translating into a morphologically-rich language require more knowledge about source language morphology? In order to investigate these questions, we fix the source language and train NMT models on
different target languages.
For example, given an Arabic source
we train Arabic-to-English/Hebrew/German
systems.
These target languages represent a morphologically-poor language (English), a morphologically-rich language with similar morphology to the source language (Hebrew), and a morphologically-rich language with different morphology (German).
To make a fair comparison, we train the models on the intersection of the training data based on the source language. In this way the experimental setup is completely identical: the models are trained on the same Arabic sentences with different translations.
Figure~\ref{fig:target-lang} shows POS and morphology accuracy of word-based representations from the NMT encoders, as well as
corresponding BLEU scores. As expected, translating to
English is easier than translating to
the morphologically-richer Hebrew and German, resulting in higher BLEU.
Despite their similar morphologies,
translating Arabic to Hebrew is worse than Arabic to German, which can be attributed to the
richer Hebrew morphology
compared to German. POS and morphology accuracies share an intriguing pattern: the representations that are learned when translating to
English are better for predicting POS or morphology than those learned when translating to
German, which are in turn better than those learned when translating to
Hebrew. This is remarkable given that English is a morphologically-poor language that
does not display many of the
morphological properties that are found in the Arabic source. In contrast, German and Hebrew have richer morphologies, so one could expect that translating into them
would make the model learn more about morphology.
A possible explanation for this phenomenon is that the Arabic-English model is simply better
than the Arabic-Hebrew and Arabic-German models, as hinted by the BLEU scores in Table \ref{tab:results-all-pairs}. The inherent
difficulty in translating Arabic to Hebrew/German
may affect the ability to learn good representations of word structure.
To probe this more, we
trained an Arabic-Arabic autoencoder
on the same training data. We found that it learns
to recreate the test sentences extremely well, with very high BLEU scores (Figure~\ref{fig:target-lang}). However, its word representations are actually inferior for the purpose of POS/morphological tagging. This
implies that higher BLEU does
not necessarily entail better morphological representations. In other words, a better translation model learns more informative representations, but only when it is actually learning to translate rather than merely memorizing the data as in the autoencoder case.
We found
this to be consistently true also for char-based experiments,
and in other language pairs.
|
What do Neural Machine Translation Models Learn about Morphology? | 1704.03471 | Table 3: POS tagging accuracy using encoder and decoder representations with/without attention. | [
"Attn",
"POS Accuracy ENC",
"POS Accuracy DEC",
"BLEU Ar-En",
"BLEU En-Ar"
] | [
[
"✓",
"89.62",
"86.71",
"24.69",
"13.37"
],
[
"✗",
"74.10",
"85.54",
"11.88",
"5.04"
]
] | "There is a modest drop in representation quality with the decoder. This drop may be correlated with(...TRUNCATED) | "\\section{Motivation} \\label{sec:motivation}\n\nTranslating morphologically-rich languages is espe(...TRUNCATED) |
What do Neural Machine Translation Models Learn about Morphology? | 1704.03471 | Table 4: POS tagging accuracy using word-based and char-based encoder/decoder representations. | [
"[EMPTY]",
"POS Accuracy ENC",
"POS Accuracy DEC",
"BLEU Ar-En",
"BLEU En-Ar"
] | [
[
"Word",
"89.62",
"86.71",
"24.69",
"13.37"
],
[
"Char",
"95.35",
"91.11",
"28.42",
"13.00"
]
] | "In both bases, char-based representations perform better. BLEU scores behave differently: the char-(...TRUNCATED) | "\\section{Motivation} \\label{sec:motivation}\n\nTranslating morphologically-rich languages is espe(...TRUNCATED) |
What do Neural Machine Translation Models Learn about Morphology? | 1704.03471 | "Table 5: POS and morphology accuracy on predicted tags using word- and char-based representations f(...TRUNCATED) | [
"[EMPTY]",
"Layer 0",
"Layer 1",
"Layer 2"
] | [["[EMPTY]","Word/Char (POS)","Word/Char (POS)","Word/Char (POS)"],["De","91.1/92.0","93.6/95.2","93(...TRUNCATED) | "We report here results that were omitted from the paper due to the space limit. As noted in the pap(...TRUNCATED) | "\\section{Motivation} \\label{sec:motivation}\n\nTranslating morphologically-rich languages is espe(...TRUNCATED) |
What do Neural Machine Translation Models Learn about Morphology? | 1704.03471 | Table 7: POS accuracy and BLEU using decoder representations from different language pairs. | [
"[EMPTY]",
"En-De",
"En-Cz",
"De-En",
"Fr-En"
] | [
[
"POS",
"94.3",
"71.9",
"93.3",
"94.4"
],
[
"BLEU",
"23.4",
"13.9",
"29.6",
"37.8"
]
] | "There is a modest drop in representation quality with the decoder. This drop may be correlated with(...TRUNCATED) | "\\section{Motivation} \\label{sec:motivation}\n\nTranslating morphologically-rich languages is espe(...TRUNCATED) |
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer | 1701.06538 | Table 8: Model comparison on 100 Billion Word Google News Dataset | ["Model","Test Perplexity","Test Perplexity","ops/timestep (millions)","#Params excluding embed. & s(...TRUNCATED) | [["[EMPTY]",".1 epochs","1 epoch","[EMPTY]","(millions)","(billions)","(observed)"],["Kneser-Ney 5-g(...TRUNCATED) | ": We evaluate our model using perplexity on a holdout dataset. Perplexity after 100 billion trainin(...TRUNCATED) | "\\documentclass{article} % For LaTeX2e\n\\pdfoutput=1\n\n\n\n\n\n\n\n\n\n\n\n\n\\usepackage[T1]{fon(...TRUNCATED) |
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer | 1701.06538 | Table 6: Experiments with different combinations of losses. | ["[ITALIC] wimportance","[ITALIC] wload","Test Perplexity","[ITALIC] CV( [ITALIC] Importance( [ITALI(...TRUNCATED) | [["0.0","0.0","39.8","3.04","3.01","17.80"],["0.2","0.0","[BOLD] 35.6","0.06","0.17","1.47"],["0.0",(...TRUNCATED) | "All the combinations containing at least one the two losses led to very similar model quality, wher(...TRUNCATED) | "\\documentclass{article} % For LaTeX2e\n\\pdfoutput=1\n\n\n\n\n\n\n\n\n\n\n\n\n\\usepackage[T1]{fon(...TRUNCATED) |